# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.1 64-bit (system)
# name: python3
# ---
# # Bootstrap Method for Two Samples
# This notebook introduces the bootstrap method for comparing two groups.
import numpy as np
import pandas as pd
#from scipy import stats as st
#from statsmodels.stats import weightstats as stests
from statsmodels.distributions.empirical_distribution import ECDF
import matplotlib.pyplot as plt
import seaborn as sns;
sns.set_style("whitegrid")
# ## Bootstrap method for two independent samples
# The bootstrap method idea is simple: "If there is no difference between two treatments, a particular score is just as likely to end up in one group as in the other."
#
# The bootstrap method allows us to make inferences on the population from which the data are a random sample, regardless of the shape of the population distribution.
#
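# Before stepping through the details, here is a compact end-to-end sketch of the shift-and-resample procedure in plain NumPy (toy data and hypothetical names, not the notebook's variables):

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.normal(85, 3, 100)   # toy "class C"
b = rng.normal(90, 3, 95)    # toy "class D"

obs = a.mean() - b.mean()              # observed statistic
cm = np.concatenate([a, b]).mean()     # combined mean
a0 = a - a.mean() + cm                 # shift both samples so they
b0 = b - b.mean() + cm                 # share a mean, as Ho assumes

# resample WITH replacement under Ho and collect the statistic
diffs = np.array([
    rng.choice(a0, a0.size).mean() - rng.choice(b0, b0.size).mean()
    for _ in range(10000)
])
p = 2 * min((diffs <= obs).mean(), (diffs >= obs).mean())
print(p)  # near 0 for this toy data: the group means clearly differ
```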
# The first step is to transform the two samples we want to compare into others that share the mean.
# Let's work with classes C and D.
#
# We will test `Ho: mean(classC) = mean(classD)` without assuming equal variances. We need estimates of classC and classD that use only the assumption of the common mean.
#
# Let `x` be the combined sample: `x` is the result of concatenating classes C and D.
# Let's generate values for classes C and D:
np.random.seed(123)
classC = np.random.normal(85, 3, 100)
classD = np.random.normal(90, 3, 95)
# Notice that the means of `classC` and `classD` are different.
ax=sns.kdeplot(x=classC, color='limegreen', shade=True, label='Class C')
ax.vlines(x=np.mean(classC), ymin=0, ymax=0.092, color='black')
ax=sns.kdeplot(x=classD, color='orange', shade=True, alpha=0.3, label='Class D')
ax.vlines(x=np.mean(classD), ymin=0, ymax=0.154, color='black')
plt.legend();
print('Mean of class C = %.2f' % np.mean(classC))
print('Mean of class D = %.2f' % np.mean(classD))
# Notice that the number of students in each class is also different.
print('Length of class C =', len(classC))
print('Length of class D =', len(classD))
# Let's transform the original samples into others that share their means.
def combined_mean(sample1, sample2):
'''
It concatenates sample1 with sample2
and returns the combined mean.
'''
x = np.concatenate((sample1, sample2))
return(np.mean(x))
cmean = combined_mean(classC, classD)
print('Mean of combined sample = %.2f' % cmean)
# Creating classCt (class C transformed) and classDt (class D transformed)
classCt = classC - np.mean(classC) + cmean
classDt = classD - np.mean(classD) + cmean
# +
ax=sns.kdeplot(x=classCt, shade=True, color='limegreen', label='Class C transf')
ax.vlines(x=np.mean(classCt), ymin=0, ymax=0.092, color='black')
ax=sns.kdeplot(x=classDt, shade=True, color='orange', alpha=0.3, label='Class D transf')
ax.vlines(x=np.mean(classDt), ymin=0, ymax=0.154, color='black')
plt.legend();
# -
print('Mean of shifted class C = %.2f' % np.mean(classCt))
print('Mean of shifted class D = %.2f' % np.mean(classDt))
# `generateSamples` is the function for generating samples WITH replacement.
def generateSamples(sample_data, num_samples=10000, sample_size=100):
'''
It returns a DataFrame where each column is a sample (with replacement).
'''
dfSamples = pd.DataFrame()
for k in range(num_samples):
sample = np.random.choice(sample_data, size=sample_size)
column_name = 'Sample'+str(k)
dfSamples[column_name] = sample
return(dfSamples)
# Let's generate two DataFrames of generated samples with replacement: one for `classCt` and the other for `classDt`
df_C = generateSamples(classCt, num_samples=10000, sample_size=50)
print(df_C.shape)
df_D = generateSamples(classDt, num_samples=10000, sample_size=50)
print(df_D.shape)
# ### Statistic: difference of means
dMeans = np.mean(classC) - np.mean(classD)
sample_distribution_dMeans = df_C.mean() - df_D.mean()
sns.histplot(x=sample_distribution_dMeans).set(title='Sample Distribution - Diff of Means');
def getpValue(sample_distribution, obs_value, alpha=0.05, alternative='two-sided'):
'''
sample_distribution: the sample distribution
obs_value: observed value
alpha: significance level
alternative: one of the three values: 'two-sided', 'smaller', or 'larger'
'''
ecdf = ECDF(sample_distribution)
if alternative=='two-sided':
if obs_value < np.mean(sample_distribution):
p_val = 2*ecdf(obs_value)
else:
p_val = 2*(1-ecdf(obs_value))
elif alternative=='smaller':
p_val = ecdf(obs_value)
else:
p_val = 1-ecdf(obs_value)
return(p_val)
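# As a quick self-contained check of the logic above, the same p-value computation can be mirrored with a plain NumPy empirical CDF (toy data; `ECDF(sample)(x)` from statsmodels likewise returns the fraction of the sample that is <= x):

```python
import numpy as np

def p_value(dist, obs, alternative='two-sided'):
    dist = np.asarray(dist)
    ecdf = (dist <= obs).mean()          # empirical CDF evaluated at obs
    if alternative == 'two-sided':
        return 2 * ecdf if obs < dist.mean() else 2 * (1 - ecdf)
    if alternative == 'smaller':
        return ecdf
    return 1 - ecdf                      # 'larger'

rng = np.random.default_rng(0)
null_dist = rng.normal(0, 1, 10000)      # stand-in sampling distribution under Ho
print(p_value(null_dist, 2.5))           # far in the tail -> small p
print(p_value(null_dist, 0.0))           # central value -> large p
```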
def hyp_test(sampl_value, sampl_distribution, alpha=0.05, alternative='two-sided'):
'''
sampl_value: observed value calculated from the sample
sampl_distribution: sample distribution calculated from the sample
alpha: significance level
alternative: one of the three values: 'two-sided', 'smaller', and 'larger'
'''
sigHa = {'two-sided':'!=', 'smaller':'< ', 'larger':'> '}
print('--- Bootstrap Hypothesis Test ---')
print(' Sample Mean = %.2f' %(sampl_value))
ax = sns.kdeplot(x=sampl_distribution, color='lightskyblue', shade=True, alpha=0.4)
plt.axvline(x=sampl_value, ymin=0, ymax= 0.03, color='black', linewidth=6)
plt.title('Sampling Distribution')
p_val = getpValue(sampl_distribution, sampl_value, alpha, alternative)
if alternative=='two-sided':
cv1 = np.round(np.percentile(sampl_distribution, (alpha/2)*100),2)
        cv2 = np.round(np.percentile(sampl_distribution, 100-(alpha/2)*100),2)
plt.axvline(x = cv1, ymin=0, ymax=0.5, color='orangered', linewidth=2)
plt.axvline(x = cv2, ymin=0, ymax=0.5, color='orangered', linewidth=2);
elif alternative=='smaller':
cv1 = np.round(np.percentile(sampl_distribution, alpha*100),2)
plt.axvline(x = cv1, ymin=0, ymax=0.5, color='orangered', linewidth=2)
else:
cv2 = np.round(np.percentile(sampl_distribution, 100-alpha*100),2)
plt.axvline(x = cv2, ymin=0, ymax=0.5, color='orangered', linewidth=2)
print(' p-value = '+str(np.round(p_val,4)))
hyp_test(dMeans, sample_distribution_dMeans)
# ### Statistic: difference of medians
dMed = np.median(classC) - np.median(classD)
sample_distribution_dMed = df_C.median() - df_D.median()
sns.histplot(x=sample_distribution_dMed).set(title='Sample Distribution - Diff of Medians');
hyp_test(dMed, sample_distribution_dMed)
# ### Statistic: t-test
# The following function calculates the `t-statistic` for the original two samples.
def t_stat(sample1, sample2):
    n = len(sample1)
    m = len(sample2)
    denom = np.sqrt(np.var(sample1)/n + np.var(sample2)/m)
    return (np.mean(sample1) - np.mean(sample2))/denom
t = t_stat(classC, classD)
# Let's define a function for calculating the sample distribution `t` using both DataFrames.
def createSampleDistributionT(df1, df2):
'''
It calculates the t statistic for all
columns in both DataFrames.
'''
n=df1.shape[0]
m=df2.shape[0]
denom=np.sqrt(df1.var()/n + df2.var()/m)
return(list((df1.mean()-df2.mean())/denom))
sample_distribution_t = createSampleDistributionT(df_C, df_D)
sns.histplot(x=sample_distribution_t).set(title='Sample Distribution - t');
hyp_test(t, sample_distribution_t)
# ### Summarizing using classes
# Let's use the previous class definitions to create another one that summarizes the process.
import goliathBootstrap as gb
# `TwoSamplesHT` is the new class for the two-samples hypothesis test using the bootstrapping method.
class TwoSamplesHT():
'''
'''
def __init__(self, sample1, sample2, num_samples=10000, sample_size=100, stat='mean'):
'''
sample1 and sample2 are the two independent samples to be compared
'''
self.stat = stat
self.sample1 = sample1
self.sample2 = sample2
tsample1, tsample2 = self.transformingSamples()
self.tsample1 = gb.goliathBootstrap(tsample1, num_samples, sample_size)
self.tsample2 = gb.goliathBootstrap(tsample2, num_samples, sample_size)
self.two = gb.OneSampleHT([1,2,3], num_samples=5, sample_size=2)
if stat == 'median':
self.diffMedians()
elif stat == 't':
self.T()
else:
self.diffMeans()
def transformingSamples(self):
'''
It transforms sample1 and sample2 into two new samples tsample1
and tsample2, that share their mean.
'''
# Calculating combined_mean
combined_sample = np.concatenate((self.sample1, self.sample2))
combined_mean = np.mean(combined_sample)
tsample1 = self.sample1 - np.mean(self.sample1) + combined_mean
tsample2 = self.sample2 - np.mean(self.sample2) + combined_mean
return(tsample1, tsample2)
def graphSampleDistribution(self):
'''
'''
self.two.graphSampleDistribution()
def diffMeans(self):
'''
statistic: difference of means
Use the difference of means as statistic for hypothesis testing.
The sample distribution is computed using the difference of means.
'''
self.tsample1.createSampleDistribution(func=np.mean)
self.tsample2.createSampleDistribution(func=np.mean)
sample_distribution = self.tsample1.sample_distribution - self.tsample2.sample_distribution
self.two.setSampleDistribution(sample_distribution, func=np.mean)
self.two.obs_value = np.mean(self.sample1) - np.mean(self.sample2)
def diffMedians(self):
'''
statistic: difference of medians
Use the difference of medians as statistic for hypothesis testing.
The sample distribution is computed using the difference of medians.
'''
self.tsample1.createSampleDistribution(func=np.median)
self.tsample2.createSampleDistribution(func=np.median)
sample_distribution = self.tsample1.sample_distribution - self.tsample2.sample_distribution
self.two.setSampleDistribution(sample_distribution, func=np.median)
self.two.obs_value = np.median(self.sample1) - np.median(self.sample2)
def T(self):
'''
statistic: t-statistic
Use the t-statistic for hypothesis testing.
The sample distribution is computed using the t-statistic.
'''
self.tsample1.createSampleDistribution(func=np.var)
samp_dist_var1 = self.tsample1.sample_distribution
self.tsample1.createSampleDistribution(func=np.mean)
samp_dist_mean1 = self.tsample1.sample_distribution
samp_dist_n1 = self.tsample1.sample_size
self.tsample2.createSampleDistribution(func=np.var)
samp_dist_var2 = self.tsample2.sample_distribution
self.tsample2.createSampleDistribution(func=np.mean)
samp_dist_mean2 = self.tsample2.sample_distribution
samp_dist_n2 = self.tsample2.sample_size
denom = np.sqrt(samp_dist_var1/samp_dist_n1 + samp_dist_var2/samp_dist_n2)
sample_distribution = (samp_dist_mean1 - samp_dist_mean2)/denom
self.two.setSampleDistribution(sample_distribution, func=np.mean)
# Calculating obs_value
n=len(self.sample1)
m=len(self.sample2)
denom=np.sqrt(np.var(self.sample1)/n + np.var(self.sample2)/m)
self.two.obs_value = (np.mean(self.sample1)-np.mean(self.sample2))/denom
print(self.two.obs_value)
def twoSamplesHT(self, stat='mean', alpha=0.05, alternative='two-sided'):
'''
It computes the bootstrap two-samples test.
stat: 'mean', 'median', 't'
alpha: significance level
alternative: one of the three values: 'two-sided', 'smaller', or 'larger'
'''
print(' Sample 1 \t Sample 2')
print('Mean: %.2f \t %.2f' %(np.mean(self.sample1), np.mean(self.sample2)))
print('Var: %.2f \t %.2f' %(np.var(self.sample1), np.var(self.sample2)))
print('n: %i \t %i' %(len(self.sample1), len(self.sample2)))
#sigHo = {'two-sided':' =', 'smaller':'>=', 'larger':'<='}
sigHa = {'two-sided':'!=', 'smaller':'< ', 'larger':'> '}
print('--- Bootstrapping Method ---')
#print(' Ho: mean(sample1) = mean(sample2)')
#print(' Ha: mean(sample1)', sigHa[alternative], 'mean(sample2)')
#self.two.sample_distribution_Ho =
self.two.createSampleDistributionHo(0)
p_val = self.two.getpValue(self.two.obs_value, alpha, alternative)
print(' p-value = '+str(np.round(p_val,4)))
def graphTwoSamplesHT(self, stat='mean', alpha=0.05, alternative='two-sided'):
'''
It computes the bootstrap one-sample test and gets graphical results.
stat: 'mean', 'median', 't'
alpha: significance level
alternative: one of the three values: 'two-sided', 'smaller', or 'larger'
'''
print(' Sample 1 \t Sample 2')
print('Mean: %.2f \t %.2f' %(np.mean(self.sample1), np.mean(self.sample2)))
print('Var: %.2f \t %.2f' %(np.var(self.sample1), np.var(self.sample2)))
print('n: %i \t %i' %(len(self.sample1), len(self.sample2)))
#sigHo = {'two-sided':' =', 'smaller':'>=', 'larger':'<='}
sigHa = {'two-sided':'!=', 'smaller':'< ', 'larger':'> '}
print('--- Bootstrapping Method ---')
self.two.createSampleDistributionHo(0)
p_val = self.two.graphpValue(self.two.obs_value, alpha, alternative)
print(' p-value = '+str(np.round(p_val,4)))
My2S = TwoSamplesHT(classC, classD)
My2S.twoSamplesHT()
My2S.graphTwoSamplesHT()
My2Smed = TwoSamplesHT(classC, classD, stat='median')
My2Smed.graphTwoSamplesHT()
My2St = TwoSamplesHT(classC, classD, stat='t')
My2St.graphTwoSamplesHT()
np.random.seed(123)
classE = np.random.normal(89, 4, 90)
classD = np.random.normal(90, 5, 95)
My2S = TwoSamplesHT(classE, classD)
My2S.graphTwoSamplesHT()
My2S.graphSampleDistribution()
My2Smed = TwoSamplesHT(classC, classD, stat='median')
My2Smed.graphSampleDistribution()
My2Smed = TwoSamplesHT(classC, classD, stat='mean')
My2Smed.graphSampleDistribution()
# Source: GoliathResearch/Boot_2_Samp_Test.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import datetime
keys = ['8to9', '9to10', '10to11', '11to12','12to13','13to14','14to15','15to16','16to17','17to18','18to19','19to20']
# +
# read data
contingent_ema = pd.read_csv('eventcontingent-ema.csv')
eod_ema = pd.read_csv('eod-ema.csv')
contingent_ema_alternative = pd.read_csv('eventcontingent-ema-alternative.csv')
eod_ema_alternative = pd.read_csv('eod-ema-alternative.csv')
contingent_ema_backup = pd.read_csv('eventcontingent-ema-backup.csv')
eod_ema_backup = pd.read_csv('eod-ema-backup.csv')
# -
def contingent_vs_eod(contingent_ema, eod_ema, alt_date=False):
'''
checks how reliable eod_ema is in tracking smoking events recorded in contingent_ema
'''
days_smoked = {}
# store events in eod_ema
for index, row in eod_ema.iterrows():
if row['status'] == "MISSED":
continue
for i in keys:
if row[i] == 1:
try:
time = datetime.datetime.strptime(row['date'], '%m/%d/%Y %H:%M')
                except ValueError:
time = datetime.datetime.strptime(row['date'], '%Y-%m-%d %H:%M:%S')
                date = (time.year, time.month, time.day)
                if time.hour in (0, 1):
                    # attribute just-after-midnight reports to the previous day;
                    # timedelta handles month/year boundaries, unlike day-1
                    rolled = time - datetime.timedelta(days=1)
                    date = (rolled.year, rolled.month, rolled.day)
if row['participant_id'] not in days_smoked:
days_smoked[row['participant_id']] = set()
days_smoked[row['participant_id']].add(date)
break
count = 0
missing = 0
missing_days = 0
lst = []
events = set()
# store contingent events
for index, row in contingent_ema.iterrows():
count += 1
try:
time = datetime.datetime.strptime(row['date'], '%m/%d/%y %H:%M')
        except ValueError:
time = datetime.datetime.strptime(row['date'], '%Y-%m-%d %H:%M:%S')
date = (time.year, time.month, time.day)
if row['participant_id'] not in days_smoked:
continue
if date not in days_smoked[row['participant_id']]:
missing += 1
if date not in events:
missing_days += 1
events.add(date)
lst.append(index)
    missing_in_eod = contingent_ema.loc[lst]
# percentage of contingent events missed by eod_ema
print("Contingent vs. EOD EMA inconsistency percentage by entries is: ", missing/float(count))
print("Contingent vs. EOD EMA inconsistency percentage by day is: ", missing_days/float(count))
return missing_in_eod
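# One subtlety above is attributing reports logged just after midnight to the previous day. Using `datetime.timedelta` (rather than subtracting 1 from the day number) keeps this correct across month and year boundaries — a small illustrative check:

```python
import datetime

t = datetime.datetime(2020, 3, 1, 0, 30)   # 12:30 AM on March 1st (a leap year)
if t.hour in (0, 1):
    t -= datetime.timedelta(days=1)        # rolls back to Feb 29, not day 0
print((t.year, t.month, t.day))  # (2020, 2, 29)
```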
contingent_vs_eod(contingent_ema, eod_ema)
contingent_vs_eod(contingent_ema_alternative, eod_ema_alternative)
contingent_vs_eod(contingent_ema_backup, eod_ema_backup)
# Source: exploratory_data_analysis/.ipynb_checkpoints/measurements_contingent_eod-checkpoint.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# %pylab inline
from __future__ import print_function
import os.path
import pandas
import src
import sklearn
import os
import sys
import scipy
import scipy.stats
import csv
# +
def fake(*args, **kwargs):
print('Fake called with', str(args), str(kwargs))
sys.exit(1)
# fake out the create_model so we don't accidentally attempt to create data
src.common.create_model = fake
# -
print(os.getcwd())
if os.getcwd().endswith('notebooks'):
os.chdir('..')
print(os.getcwd())
# +
args = dict(level='file', force=False, model='lda', source=['release', 'changeset', 'temporal'], random_seed_value=1)
model_config, model_config_string = src.main.get_default_model_config(args)
args.update({'model_config': model_config, 'model_config_string': model_config_string})
changeset_config, changeset_config_string = src.main.get_default_changeset_config()
args.update({'changeset_config': changeset_config, 'changeset_config_string': changeset_config_string})
projects = src.common.load_projects(args)
projects
# -
import dulwich
import dulwich.patch
import io
from StringIO import StringIO
# +
def get_diff(repo, changeset):
""" Return a text representing a `git diff` for the files in the
changeset.
"""
patch_file = StringIO()
dulwich.patch.write_object_diff(patch_file,
repo.object_store,
changeset.old, changeset.new)
return patch_file.getvalue()
def walk_changes(repo):
""" Returns one file change at a time, not the entire diff.
"""
for walk_entry in repo.get_walker(reverse=True):
commit = walk_entry.commit
for change in get_changes(repo, commit):
yield change
def get_changes(repo, commit):
# initial revision, has no parent
if len(commit.parents) == 0:
for changes in dulwich.diff_tree.tree_changes(
repo.object_store, None, commit.tree
):
diff = get_diff(repo, changes)
yield commit, None, diff, get_path(changes)
for parent in commit.parents:
# do I need to know the parent id?
try:
for changes in dulwich.diff_tree.tree_changes(
repo.object_store, repo[parent].tree, commit.tree, want_unchanged=False
):
diff = get_diff(repo, changes)
yield commit, parent, diff, get_path(changes)
except KeyError as e:
print("skipping commit:", commit, ", parent:", parent, ", exception:", e)
def get_path(change):
path = '/dev/null'
if change.old.path and change.new.path:
path = change.new.path
elif change.old.path:
path = change.old.path
elif change.new.path:
path = change.new.path
return path
# -
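# `get_path` only needs objects exposing `.old.path` and `.new.path`, so its branching (modify/rename, addition, deletion) can be checked with lightweight stand-in objects — hypothetical, outside dulwich:

```python
from collections import namedtuple

Entry = namedtuple('Entry', ['path'])
Change = namedtuple('Change', ['old', 'new'])

def get_path(change):
    path = '/dev/null'
    if change.old.path and change.new.path:
        path = change.new.path      # modified or renamed: prefer the new name
    elif change.old.path:
        path = change.old.path      # deletion: only the old name exists
    elif change.new.path:
        path = change.new.path      # addition: only the new name exists
    return path

print(get_path(Change(Entry('a.py'), Entry('b.py'))))   # b.py
print(get_path(Change(Entry(None), Entry('new.py'))))   # new.py
print(get_path(Change(Entry('old.py'), Entry(None))))   # old.py
```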
import re
unified = re.compile(r'^[+ -].*')
context = re.compile(r'^ .*')
addition = re.compile(r'^\+.*')
removal = re.compile(r'^-.*')
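# The regexes above classify unified-diff lines. A minimal check on a hand-written diff (hypothetical content) shows the filter-then-count scheme used below: keep only diff body lines, drop the two `---`/`+++` file-name headers, then count additions, removals, and context lines:

```python
import re

unified = re.compile(r'^[+ -].*')
context = re.compile(r'^ .*')
addition = re.compile(r'^\+.*')
removal = re.compile(r'^-.*')

diff = "\n".join([
    "--- a/foo.py",
    "+++ b/foo.py",
    "+new line",
    "-old line",
    " unchanged",
])
lines = [l for l in diff.splitlines() if unified.match(l)][2:]  # drop file-name headers
a = sum(1 for l in lines if addition.match(l))
r = sum(1 for l in lines if removal.match(l))
c = sum(1 for l in lines if context.match(l))
print(a, r, c)  # 1 1 1
```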
# +
def get_commit_info(project):
commit2linecount = dict()
for commit, p, d, fname in walk_changes(src.common.load_repos(project)[0]):
diff_lines = filter(lambda x: unified.match(x),
d.splitlines())
if len(diff_lines) < 2:
# useful for not worrying with binary files
a, r, c = 0, 0, 0
else:
# # sanity?
assert diff_lines[0].startswith('--- '), diff_lines[0]
assert diff_lines[1].startswith('+++ '), diff_lines[1]
# parent_fn = diff_lines[0][4:]
# commit_fn = diff_lines[1][4:]
lines = diff_lines[2:] # chop off file names hashtag rebel
a = len(filter(lambda x: addition.match(x), lines))
r = len(filter(lambda x: removal.match(x), lines))
c = len(filter(lambda x: context.match(x), lines))
m = len(commit.message.splitlines())
if commit.id not in commit2linecount:
commit2linecount[commit.id] = dict()
commit2linecount[commit.id][fname] = (a, r, c, m)
return commit2linecount
# +
def get_commit_info1(repo, commit):
commit2linecount = dict()
for commit, p, d, fname in get_changes(repo, commit):
diff_lines = filter(lambda x: unified.match(x),
d.splitlines())
if len(diff_lines) < 2:
# useful for not worrying with binary files
a, r, c = 0, 0, 0
else:
# # sanity?
assert diff_lines[0].startswith('--- '), diff_lines[0]
assert diff_lines[1].startswith('+++ '), diff_lines[1]
# parent_fn = diff_lines[0][4:]
# commit_fn = diff_lines[1][4:]
lines = diff_lines[2:] # chop off file names hashtag rebel
a = len(filter(lambda x: addition.match(x), lines))
r = len(filter(lambda x: removal.match(x), lines))
c = len(filter(lambda x: context.match(x), lines))
m = len(commit.message.splitlines())
commit2linecount[fname] = (a, r, c, m)
return commit2linecount
# -
for project in projects:
ids = src.common.load_ids(project)
issue2git, git2issue = src.common.load_issue2git(project, ids, filter_ids=True)
goldset = src.goldsets.load_goldset(project)
#commit2linecount = get_commit_info(project)
repo = src.common.load_repos(project)[0]
with open(os.path.join(project.full_path, 'changes-file-goldset.csv'), 'w') as f:
w = csv.writer(f)
w.writerow(['sha', 'issues', 'change_type', 'name', 'additions', 'removals', 'context', 'message'])
for sha, changes in goldset.items():
commit, change_list = changes
info = get_commit_info1(repo, commit)
for change_type, name in change_list:
if sha in git2issue:
issues = set(git2issue[sha])
w.writerow([sha, ';'.join(issues), change_type, name] + list(info[name]))
issue2git, git2issue = src.common.load_issue2git(project, ids, filter_ids=False)
with open(os.path.join(project.data_path, 'changes-file-issues.csv'), 'w') as f:
w = csv.writer(f)
w.writerow(['sha', 'issues', 'change_type', 'name', 'additions', 'removals', 'context', 'message'])
for sha, changes in goldset.items():
commit, change_list = changes
info = get_commit_info1(repo, commit)
for change_type, name in change_list:
if sha in git2issue:
issues = set(git2issue[sha])
w.writerow([sha, ';'.join(issues), change_type, name]+ list(info[name]))
with open(os.path.join(project.data_path, 'changes-file-full.csv'), 'w') as f:
w = csv.writer(f)
w.writerow(['sha', 'change_type', 'name', 'additions', 'removals', 'context', 'message'])
for sha, changes in goldset.items():
commit, change_list = changes
info = get_commit_info1(repo, commit)
for change_type, name in change_list:
try:
w.writerow([sha, change_type, name] + list(info[name]))
except KeyError as e:
print("skipping commit:", commit, ", name:", name, ", exception:", e)
sha, project
commit
name
changes
info
get_commit_info1(repo, commit)
# Source: notebooks/goldset as csv.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Workshop Tutorial 1: Practical Apricot Usage
#
# Welcome to a tutorial on how to reduce redundancy in massive data sets using submodular optimization! In this tutorial, we will explore submodular optimization at a high level and see how it can be used to select representative subsets of data; these subsets can then be used on their own, such as to create a highlight reel for an album, or to create smaller training sets for machine learning models that achieve similar performance in a fraction of the time. Although submodular optimization is as general-purpose as convex optimization, this tutorial will focus on using basic optimization algorithms on two main functions: a feature-based function, and facility location functions. Finally, this tutorial will focus on practical usage of apricot. Please see the other tutorials for more of the theory behind how these functions work.
#
# Let's get started!
# +
# %pylab inline
import seaborn
seaborn.set_style('whitegrid')
from tqdm import tqdm
# -
# ## Feature-based Selection
#
# A simple class of submodular functions are the feature-based ones. At a high level, feature-based functions are those that maximize diversity in the observed feature values themselves. This property means that they work well in settings where each feature represents some quality of the data and higher values mean that the example has more of that value: for instance, when vectorizing text data, each feature might represent a word and the value would be the number of times that the word appears in the document.
#
# More formally, feature-based functions take the form
#
# \begin{equation}
# f(X) = \sum\limits_{u \in U} w_{u} \phi_{u} \left( \sum\limits_{x \in X} m_{u}(x) \right)
# \end{equation}
# where $x$ is a single example, $X$ is the set of all examples, $u$ is a single feature, $U$ is the set of all features, $w$ is a weight for each feature, and $\phi$ is a saturating concave function such as log or sqrt.
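# To make the formula concrete, here is a tiny NumPy evaluation of $f(X)$ with uniform weights and $\phi = \sqrt{\cdot}$ (toy counts, not the newsgroups data):

```python
import numpy as np

X = np.array([[2, 0, 1],
              [0, 3, 0]])       # two examples, three features (word counts)
w = np.ones(X.shape[1])         # uniform feature weights

def feature_based(X_subset, w, phi=np.sqrt):
    # f(X) = sum_u w_u * phi( sum_{x in X} m_u(x) )
    return float(np.sum(w * phi(X_subset.sum(axis=0))))

print(round(feature_based(X, w), 4))  # sqrt(2) + sqrt(3) + sqrt(1)
```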
# ### 20 newsgroups
#
# Let's start off with some text data. Below, some code is provided to download a shuffled version of the 20 newsgroups data set, which contains articles and labels for 20 topics. However, as we can see, the downloaded text is not in a convenient featurized form that can be used by machine learning models.
# +
from sklearn.datasets import fetch_20newsgroups
train_data = fetch_20newsgroups(subset='train', random_state=0, shuffle=True)
train_data.data[2]
# -
# Processing this to get rid of the weird characters like "\n" and converting it to a vectorized form is not really the point of this tutorial, so let's use sklearn's built-in vectorizer to get a clean feature matrix to operate on. Please fill in the below cells.
# +
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = ...
X_train = ...
X_train.shape
# -
# Let's take a look at how dense that data is. We can do this by creating a heatmap where each red dot indicates that a feature has a non-zero value. If you implemented the above code correctly you should get a density of 0.08395.
# +
i = 1000
X_random_block = X_train[:i].toarray()
random_density = (X_random_block != 0).mean()
plt.figure(figsize=(12, 6))
plt.scatter(*numpy.where(X_random_block.T[:i] != 0), c='r', s=0.05)
plt.xlim(0, i)
plt.ylim(0, i)
plt.title("Words in Text Blobs: Density={:4.4}".format(random_density), fontsize=14)
plt.xlabel("Word Index", fontsize=12)
plt.ylabel("Text Blob Index", fontsize=12)
plt.show()
# -
# The above heat map is made up of the first 1000 entries in the data set after shuffling. It doesn't seem particularly dense; fewer than 10% of the values in the matrix are positive. This may not be particularly problematic when restricting ourselves to 1000 examples, but since observing more features generally means more accuracy in this setting, is there a way to ensure that our subset sees a higher percentage of the features?
#
# Well, choosing examples that exhibit values in a diverse set of features is exactly what submodular optimization and feature based functions are good at. We can define a feature-based function easily using apricot, choose an equal sized subset of examples using submodular optimization, and re-visualize the chosen examples.
#
# Fill in the next code block, using a feature-based selector to choose 1000 samples with everything else set to the default parameters. You can also set `verbose=True` to see a progress bar during selection. Note that, while apricot can operate on sparse matrices, you might need to use the `toarray()` method to convert a sparse array to a dense array for the subsequent visualization step.
# +
from apricot import FeatureBasedSelection
selector = ...
X_submodular_block = ...
# -
# Now that you've selected the examples, we can visualize the block in the same way that we visualized the randomly selected examples. If you implemented the selector correctly, you should get a density of 0.2103. Visually, the heatmap should also look significantly more red. This is because we are intentionally choosing examples that have many non-zero values, i.e., rows that would have red in a lot of columns.
# +
submodular_density = (X_submodular_block != 0).mean()
plt.figure(figsize=(12, 6))
plt.scatter(*numpy.where(X_submodular_block.T[:i] != 0), c='r', s=0.05)
plt.xlim(0, i)
plt.ylim(0, i)
plt.title("Words in Text Blobs: Density={:4.4}".format(submodular_density), fontsize=14)
plt.xlabel("Word Index", fontsize=12)
plt.ylabel("Text Blob Index", fontsize=12)
plt.show()
# -
# Next, we can take a look at the number of words that are observed at least once as we select more and more examples, either randomly, or using submodular optimization. If your implementation of selecting a subset of examples using apricot is correct you should see that a larger number of words are observed earlier in the selection process when submodular optimization is used. You do not need to do anything here.
# +
random_seen_words = (numpy.cumsum(X_random_block, axis=0) > 0).sum(axis=1)
submodular_seen_words = (numpy.cumsum(X_submodular_block, axis=0) > 0).sum(axis=1)
plt.figure(figsize=(12, 4))
plt.subplot(121)
plt.title("# Total Uniques Words Seen", fontsize=14)
plt.plot(random_seen_words, color='0.5', linewidth=2, label="Random")
plt.plot(submodular_seen_words, color='#FF6600', linewidth=2, label="Submodular Optimization")
plt.xlabel("# Examples Chosen", fontsize=12)
plt.ylabel("# Words Seen At Least Once", fontsize=12)
plt.legend(fontsize=12)
plt.subplot(122)
plt.title("# New Words Seen per Example", fontsize=14)
plt.plot(numpy.diff(random_seen_words), color='0.5', linewidth=2, label="Random")
plt.plot(numpy.diff(submodular_seen_words), color='#FF6600', linewidth=2, label="Submodular Optimization")
plt.xlabel("# Examples Chosen", fontsize=12)
plt.ylabel("# New Words in Example", fontsize=12)
plt.legend(fontsize=12)
plt.tight_layout()
plt.show()
# -
# Next, we're going to move on to the primary goal of apricot: choosing subsets for training machine learning models. Unfortunately, this is not always straightforward. As an example, we are going to consider classifying a subset of classes from the 20 newsgroups data set. Here are the classes.
fetch_20newsgroups(subset="train").target_names
# As an initial example, we will focus on two classes that are somewhat related conceptually but will likely have distinct vocabularies. We will use the TF-IDF vectorizer instead of the count vectorizer because TF-IDF is a straightforward way to downweight words that appear in many articles and to upweight words that are somewhat rare and more likely to be topic-specific. Please fill in the code below to process the training and test data properly.
# +
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
categories = ['sci.med', 'sci.space']
train_data = fetch_20newsgroups(subset='train', categories=categories, random_state=0, shuffle=True)
test_data = fetch_20newsgroups(subset='test', categories=categories, random_state=0)
vectorizer = ...
X_train = ...
X_test = ...
y_train = ...
y_test = ...
X_train.shape, X_test.shape, y_train.shape, y_test.shape
# -
# Next, use a feature-based function to select 1000 examples from the training data. 1000 examples is almost all of the data, but because the selection process is greedy we can use it to rank most of the data and then choose increasingly large subsets to train the model.
selector = ...
# Now, let's train an SGD classifier on subsets of increasing size and compare to ten draws of similarly sized random subsets. Please fill in the below code, keeping in mind that the `selector.ranking` attribute contains a ranking of indices from the original data set. For example, if the first element was `10`, that would mean that `X_train[10]` was the first element chosen by the greedy optimization process.
# +
model = SGDClassifier(random_state=0)
counts = numpy.arange(10, 1001, 10)
random_idxs = numpy.array([numpy.random.choice(X_train.shape[0], replace=False, size=1000) for i in range(10)])
random_accuracies, submodular_accuracies = [], []
for count in tqdm(counts):
idxs = selector.ranking[:count]
...
y_hat = model.predict(X_test)
acc = (y_hat == y_test).mean()
submodular_accuracies.append(acc)
accs = []
for i in range(10):
r_idxs = random_idxs[i, :count]
...
y_hat = model.predict(X_test)
acc = (y_hat == y_test).mean()
accs.append(acc)
random_accuracies.append(accs)
plt.title("20 Newsgroups Classification", fontsize=14)
plt.plot(counts, numpy.mean(random_accuracies, axis=1), color='0.5', linewidth=2, label="Random")
plt.plot(counts, submodular_accuracies, color='#FF6600', linewidth=2, label="Submodular Optimization")
plt.xlabel("# Chosen Examples", fontsize=12)
plt.ylabel("Classification Accuracy")
plt.legend(loc=4, fontsize=12)
seaborn.despine()
plt.tight_layout()
plt.show()
# -
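# The elided loop bodies above just fit the model on the selected rows. A self-contained miniature of the same pattern (synthetic data and a random ranking stand in for `X_train`, `y_train`, and `selector.ranking`, which are assumptions here):

```python
import numpy
from sklearn.linear_model import SGDClassifier

rng = numpy.random.RandomState(0)
X_demo = numpy.vstack([rng.normal(0, 1, (100, 5)), rng.normal(3, 1, (100, 5))])
y_demo = numpy.array([0] * 100 + [1] * 100)
ranking = rng.permutation(200)  # stand-in for selector.ranking

accuracies = []
for count in [50, 100, 200]:
    idxs = ranking[:count]
    model = SGDClassifier(random_state=0)
    model.fit(X_demo[idxs], y_demo[idxs])  # the elided step: fit on the subset
    accuracies.append((model.predict(X_demo) == y_demo).mean())
print(len(accuracies))  # 3
```

In the notebook's loop, the same two lines (`idxs = ...; model.fit(X_train[idxs], y_train[idxs])`) are all that is needed before the existing `model.predict` calls.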
# Looks like we can get almost the same performance with just 100 examples (~93% with submodular optimization and ~85% with random selection) as we could with the full set of 1000 examples. It is worth noting that there is a lot of variance when the number of examples chosen is very small, but that performance picks up pretty quickly. If you're not seeing these trends, it's possible that you implemented something incorrectly.
#
# If you'd like to explore apricot's abilities more broadly, try out the above cells using different sets of categories from the 20 newsgroups corpus and different types of classifiers.
# ### A Warning: Gaussian Blobs
#
# Unfortunately, not all data are amenable to feature-based functions. Specifically, these functions can fail on data sets whose features don't follow the assumed semantics, i.e., non-negative values where a higher value conveys some notion of having "more" of some feature. If you have features like coordinates, embeddings from a pre-trained model, or projections from a method like tSNE or UMAP, they may not work as you'd like.
#
# Here, we will look at using data drawn from random Gaussian blobs.
# +
from sklearn.datasets import make_blobs
numpy.random.seed(0)
centers = numpy.random.normal(100, 5, (5, 2))
X, y = make_blobs(n_samples=2500, n_features=2, centers=centers, random_state=0)
# -
# This time, we have filled in the selector code for you.
selector = FeatureBasedSelection(n_samples=100)
selector.fit(X)
# Now, let's plot the selected examples and see whether they form a representative subset.
# +
plt.figure(figsize=(12, 4))
plt.subplot(121)
plt.title("Gaussian Blob Data", fontsize=14)
plt.scatter(*X.T, color='0.5', s=10)
plt.axis('off')
plt.subplot(122)
plt.title("Feature Based Selected Examples", fontsize=14)
plt.scatter(*X.T, color='0.5', s=10)
plt.scatter(*X[selector.ranking].T, color='#FF6600', s=10, label="Selected Examples")
plt.axis('off')
plt.legend(loc=(1.01, 0.5), fontsize=12)
plt.tight_layout()
plt.show()
# -
# Oops. That doesn't look like a representative subset.
#
# Does this mean that feature-based functions cannot work in settings where the data doesn't have the same semantics as our assumptions? No! We just need to engineer features that do follow those semantics.
# ### Feature Engineering for Feature-based Functions: Gaussian Mixture Models
#
# Potentially, one of the most straightforward ways to transform this Gaussian data would be to, first, apply a Gaussian mixture model to it and, second, use the posterior probabilities from that model as the features. Basically, instead of applying submodular optimization to the original feature values themselves, we apply it to the predicted class probabilities from the mixture model. These probabilities have all the properties that we would like: (1) because they are between zero and one, they must be non-negative; (2) a higher value means an enrichment for that feature, i.e., a higher probability means an enrichment for membership in that class.
#
# Using the `GaussianMixture` object below, transform the data in `X` from the original feature values into the posterior probabilities. Because the data was generated from five clusters, your mixture should have five components. If done correctly, the resulting shape should be `(2500, 5)`.
# +
from sklearn.mixture import GaussianMixture
model = ...
X_posteriors = ...
X_posteriors.shape
# -
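# One possible completion of the cell above (a sketch): fit a five-component `GaussianMixture` and use `predict_proba` to get the posteriors. The data generation repeats the earlier cell (under the name `X_demo`) so the snippet runs on its own.

```python
import numpy
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

numpy.random.seed(0)
centers = numpy.random.normal(100, 5, (5, 2))
X_demo, _ = make_blobs(n_samples=2500, n_features=2, centers=centers, random_state=0)

# Five components, matching the five clusters the data was drawn from.
model = GaussianMixture(n_components=5, random_state=0).fit(X_demo)
X_posteriors = model.predict_proba(X_demo)
print(X_posteriors.shape)  # (2500, 5)
```

Each row of `X_posteriors` sums to one, which is exactly the property examined in the discussion below.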
# Now, apply a feature-based selector as you've done in the past.
selector = FeatureBasedSelection(n_samples=100)
selector.fit(...)
# Now, let's plot the mixture centroids as well as the selected examples.
# +
plt.figure(figsize=(12, 4))
plt.subplot(121)
plt.title("Gaussian Blob Data", fontsize=14)
plt.scatter(*X.T, color='0.5', s=10)
plt.scatter(*model.means_.T, color='b', s=10)
plt.axis('off')
plt.subplot(122)
plt.title("Feature Based Selected Examples", fontsize=14)
plt.scatter(*X.T, color='0.5', s=10)
plt.scatter(*model.means_.T, color='b', s=10, label="GMM Centroids")
plt.scatter(*X[selector.ranking].T, color='#FF6600', s=10, label="Selected Examples")
plt.axis('off')
plt.legend(loc=(1.01, 0.5), fontsize=12)
plt.tight_layout()
plt.show()
# -
# Does this look like what you might expect?
#
# If not, think more closely about the feature-based function and the data here. The sum of each example should be equal to one, so there are no examples that have a higher coverage of the feature space than other examples. However, the feature-based function includes a saturation function that diminishes the benefit of high values in one feature versus spreading them out across several features. Combined, these facts mean that the method will always try to choose examples that are split between multiple classes. Put another way, `numpy.sqrt([0.5, 0.5]).sum() = 1.414` is larger than `numpy.sqrt([1.0, 0]).sum() = 1.0`.
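# The arithmetic in that last sentence is easy to check directly:

```python
import numpy

# A concave saturation like sqrt rewards spreading mass across features.
spread = numpy.sqrt([0.5, 0.5]).sum()        # ~1.414
concentrated = numpy.sqrt([1.0, 0.0]).sum()  # 1.0
print(spread > concentrated)  # True
```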
#
# Regardless of the explanation, this isn't exactly what we were expecting. What we'd like is a way for our feature-based function to select examples near the middle of each cluster without needing cluster labels. The problem with using the posteriors, which are normalized to sum to 1, is that the examples that are purest for a particular cluster are not the ones closest to the centroid but rather the ones that are on the opposite side of that centroid from all the other centroids.
#
# What does that mean? Well, let's use a simple trick to try to pick out the purest examples from each cluster. First, we need to transform these values such that values near one become bigger, so that purity is valued higher, but values near zero remain the same. We can use an `arctanh` function for that, but you should try out any other function you'd like to see the effects. Below is an example `arctanh` function.
x = numpy.arange(0, 1, 0.001)
plt.plot(x, numpy.arctanh(x))
plt.title("Example Non-Linear Function", fontsize=14)
plt.xlabel("x", fontsize=12)
plt.ylabel("arctanh(x)", fontsize=12)
plt.show()
# +
# Nudge the posteriors away from exactly 1 so arctanh stays finite, then
# shift back up so the values remain non-negative.
X_arctanh = numpy.arctanh(X_posteriors - 1e-12) + 1e-12
selector = FeatureBasedSelection(n_samples=100)
selector.fit(X_arctanh)
plt.figure(figsize=(12, 4))
plt.subplot(121)
plt.title("Gaussian Blob Data", fontsize=14)
plt.scatter(*X.T, color='0.5', s=10)
plt.scatter(*model.means_.T, color='b', s=10)
plt.axis('off')
plt.subplot(122)
plt.title("Feature Based Selected Examples", fontsize=14)
plt.scatter(*X.T, color='0.5', s=10)
plt.scatter(*model.means_.T, color='b', s=10, label="GMM Centroids")
plt.scatter(*X[selector.ranking].T, color='#FF6600', s=10, label="Selected Examples")
plt.axis('off')
plt.legend(loc=(1.01, 0.5), fontsize=12)
plt.tight_layout()
plt.show()
# -
# We can see some interesting trends here. Unlike the previous plot, where all the chosen examples were near boundaries, most of the chosen examples here are on the very edge of the convex hull. A notable exception, however, is the top cluster. This is likely because the top cluster is so far away from the others that any example in it is considered "pure."
#
# Finally, let's get to the expected behavior. We would like to design a transformation such that our selection chooses elements that are good representatives of each cluster individually. We saw previously that using the normalized posterior probabilities can be an issue because the normalization process encourages the chosen examples to be far away from the other centroids rather than close to any particular centroid. If we get rid of that normalization process and instead use the raw probabilities that each example belongs to a particular mixture component, we can get around this.
#
# In the cell below, use the `multivariate_normal` method from scipy to calculate an array of probabilities for each example under each mixture component. Hint: you will need to do this separately for each component as part of a loop. Make sure that your final output is of shape `(n_samples, 5)`.
# +
from scipy.stats import multivariate_normal
X_probs = ...
selector = FeatureBasedSelection(n_samples=100)
selector.fit(X_probs)
plt.figure(figsize=(12, 4))
plt.subplot(121)
plt.title("Gaussian Blob Data", fontsize=14)
plt.scatter(*X.T, color='0.5', s=10)
plt.scatter(*model.means_.T, color='b', s=10)
plt.axis('off')
plt.subplot(122)
plt.title("Feature Based Selected Examples", fontsize=14)
plt.scatter(*X.T, color='0.5', s=10)
plt.scatter(*model.means_.T, color='b', s=10, label="GMM Centroids")
plt.scatter(*X[selector.ranking].T, color='#FF6600', s=10, label="Selected Examples")
plt.axis('off')
plt.legend(loc=(1.01, 0.5), fontsize=12)
plt.tight_layout()
plt.show()
# -
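# One possible completion of the `X_probs` computation above (a sketch; the data and mixture are regenerated here under the name `X_demo` so the snippet is self-contained, whereas the cell above would reuse the existing `X` and `model`):

```python
import numpy
from scipy.stats import multivariate_normal
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

numpy.random.seed(0)
centers = numpy.random.normal(100, 5, (5, 2))
X_demo, _ = make_blobs(n_samples=2500, n_features=2, centers=centers, random_state=0)
model = GaussianMixture(n_components=5, random_state=0).fit(X_demo)

# One unnormalized density column per mixture component.
X_probs = numpy.array([
    multivariate_normal.pdf(X_demo, model.means_[i], model.covariances_[i])
    for i in range(5)
]).T
print(X_probs.shape)  # (2500, 5)
```

Unlike `predict_proba`, these raw densities are not normalized across components, which is the whole point of this step.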
# If you've done this correctly, you'll notice that all of the chosen examples are near the centroids of the clusters.
#
# At this point, you might be wondering "why do I need submodular optimization to do this?" because you can just take the examples closest to centroids. The answer is two-fold: first, submodular optimization can be applied to any type of transformation where it may not be obvious how to do it by hand. Second, submodular optimization automatically balances the number of examples chosen per centroid based on their distance. This isn't a particularly complicated task here where all of the clusters are distinct, but consider this example:
# +
numpy.random.seed(0)
centers = numpy.random.normal(100, 4, (5, 2))
X2, _ = make_blobs(n_samples=1000, n_features=2, centers=centers, cluster_std=3, random_state=0)
model2 = GaussianMixture(5, random_state=0).fit(X2)
X_probs = numpy.array([multivariate_normal.pdf(X2, model2.means_[i], model2.covariances_[i]) for i in range(5)]).T
selector = FeatureBasedSelection(n_samples=100)
selector.fit(X_probs)
plt.figure(figsize=(12, 4))
plt.subplot(121)
plt.title("Gaussian Blob Data", fontsize=14)
plt.scatter(*X2.T, color='0.5', s=10)
plt.scatter(*model2.means_.T, color='b', s=10)
plt.axis('off')
plt.subplot(122)
plt.title("Feature Based Selected Examples", fontsize=14)
plt.scatter(*X2.T, color='0.5', s=10)
plt.scatter(*model2.means_.T, color='b', s=10, label="GMM Centroids")
plt.scatter(*X2[selector.ranking].T, color='#FF6600', s=10, label="Selected Examples")
plt.axis('off')
plt.legend(loc=(1.01, 0.5), fontsize=12)
plt.tight_layout()
plt.show()
# -
# Here, the selected examples cluster near one of the centroids, arguably the one that sits in a low-density area and probably has a smaller variance. Because the other centroids are in data-richer areas, their variances likely overlap significantly, and so the remaining chosen examples fall in the central region between them. Simply choosing the points nearest each centroid would not give the same results. This isn't to say that this is always exactly the most representative set from this data, just that this is a case where submodular optimization will provide different results from a simpler approach.
# ## Facility Location Selection
#
# An alternative to feature-based functions is graph-based functions. These functions operate on a similarity matrix (note: a similarity matrix is conceptually the inverse of a distance matrix, in that the most similar elements in a distance matrix have a pairwise value of zero, whereas the most distant elements in a similarity matrix have a pairwise value of zero) instead of on the feature values directly. Graph-based functions are generally more versatile than feature-based ones because any featurization of data can be converted into a similarity graph by calculating the Euclidean distance or correlation between examples, but data types that are inherently graphs can also be operated on.
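# For example, one common way to turn pairwise distances into similarities (a sketch, not necessarily apricot's internal conversion) is to subtract each distance from the maximum, so identical points get the largest value and the farthest pair gets zero:

```python
import numpy
from scipy.spatial.distance import pdist, squareform

X_demo = numpy.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
D = squareform(pdist(X_demo))  # distance matrix, zeros on the diagonal
S = D.max() - D                # similarity matrix, zeros for the farthest pair

print(S[0, 0] == S.max(), S.min() == 0.0)  # True True
```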
#
# A specific graph-based function is the facility location function, which has been used in the past to literally locate new facilities. In this setting, one wants to identify the next location that would serve the most people that are currently underserved without having to move any of the previous locations. The facility location function takes the following form:
#
# \begin{equation}
# f(X, V) = \sum\limits_{v \in V} \max\limits_{x \in X} \phi(x, v)
# \end{equation}
# where $x$ is a selected example, $X$ is the set of already selected examples, $v$ is an unselected example, $V$ is the set of unselected examples, and $\phi$ is a similarity function that either returns an entry in a pre-defined similarity matrix or calculates the similarity between two examples.
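# The objective can be illustrated on a tiny hand-made similarity matrix (a sketch; here the sum runs over every example rather than only the unselected ones, a common implementation convenience):

```python
import numpy

# Hand-made similarity matrix phi(x, v); higher values mean more similar.
S = numpy.array([[1.0, 0.8, 0.1],
                 [0.8, 1.0, 0.2],
                 [0.1, 0.2, 1.0]])

def facility_location(selected, S):
    # Each example is credited with its similarity to its closest selected example.
    return S[:, selected].max(axis=1).sum()

print(facility_location([0], S))     # ~1.9
print(facility_location([0, 2], S))  # ~2.8
```

Adding example 2, which covers the third row well, raises the objective more than adding example 1 would, which is exactly the kind of gain the greedy optimizer chases.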
#
# A challenge with using graph-based functions is that the similarity matrix has to be calculated and stored in memory for efficient computation, which can be challenging for massive data sets. However, it is more versatile because similarities can be calculated that are more informative than simple featurizations.
# ### Greedy Version of K-medoids Clustering
#
# A simple way to think about using submodular optimization to optimize a facility location function is that it is a greedy version of k-medoids clustering. As a refresher, k-medoids clustering is similar to k-means except that the cluster centroids must be examples from the training set, much like the difference between calculating the mean and calculating the median. Submodular optimization on a facility location function involves iteratively choosing the example that best explains the previously unexplained examples, i.e., that maximizes the increase in similarity between all of the examples and the set of chosen examples.
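# That greedy procedure can be sketched from scratch in a few lines of NumPy (an illustration only, not apricot's actual implementation):

```python
import numpy

rng = numpy.random.default_rng(0)
X_demo = rng.normal(size=(200, 2))

# Similarity from squared Euclidean distance, shifted to be non-negative.
d2 = ((X_demo[:, None, :] - X_demo[None, :, :]) ** 2).sum(-1)
S = d2.max() - d2

selected = []
best = numpy.zeros(len(X_demo))  # best similarity of each example to the chosen set
for _ in range(5):
    # Gain of candidate j: how much it raises every example's best similarity.
    gains = numpy.maximum(S, best).sum(axis=1) - best.sum()
    j = int(numpy.argmax(gains))
    selected.append(j)
    best = numpy.maximum(best, S[j])

print(len(selected), len(set(selected)))  # 5 5
```

Already-selected examples have zero gain, so each iteration adds a new example that best covers the still poorly covered ones.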
#
# What does that look like in practice? Implement a facility location selection object to choose 50 examples. You'll notice that, despite being a graph-based function, you can still pass in a feature matrix and it will automatically calculate a similarity graph from that.
# +
from apricot import FacilityLocationSelection
selector = ...
plt.figure(figsize=(12, 4))
plt.subplot(121)
plt.title("Gaussian Blob Data", fontsize=14)
plt.scatter(*X.T, color='0.7', s=10)
plt.scatter(*model.means_.T, color='b', s=10)
plt.axis('off')
plt.subplot(122)
plt.title("Facility Location Selected Examples", fontsize=14)
plt.scatter(*X.T, color='0.7', s=10)
plt.scatter(*model.means_.T, color='b', s=10, label="GMM Centroids")
plt.scatter(*X[selector.ranking].T, color='#FF6600', s=10, label="Selected Examples")
plt.axis('off')
plt.legend(loc=(1.01, 0.5), fontsize=12)
plt.tight_layout()
plt.show()
# -
# The selected examples should appear to be fairly uniformly distributed across the space. If you're noticing a concentration of points anywhere, you may have incorrectly implemented something.
#
# To get a sense for the selection process, let's visualize the iterative process of selecting examples.
# +
plt.figure(figsize=(14, 8))
for i in range(10):
plt.subplot(2, 5, i+1)
plt.title("{} Selections".format(i+1), fontsize=14)
plt.scatter(*X.T, color='0.7', s=10)
if i > 0:
plt.scatter(*X[selector.ranking[:i]].T, color='0.1', s=10, label="Selected Examples")
plt.scatter(*X[selector.ranking[i]].T, color='#FF6600', s=10, label="Next Selection")
plt.axis('off')
plt.legend(loc=(1.01, 0.5), fontsize=12)
plt.tight_layout()
plt.show()
# -
# You'll notice that the first example comes from around the center of the data set. As a greedy approach, the optimizer is trying to find the single best example without knowing if it will be able to choose future ones. Then, the second example comes from an underrepresented area, etc.
# ### Digits Data Set
#
# Now, let's apply facility location functions to choosing data for machine learning. A constraint of feature-based functions is that they only work when the features follow a particular set of semantics. Although there are powerful approaches for transforming features into new features that follow those semantics, it's also nice to not have to do anything fancy to get a good set of items. A good example of data where the assumptions of feature-based functions don't work out of the box is data that involves images.
#
# Let's download a reduced version of the digits data set and try training a machine learning model using selected subsets or random subsets, as we did before.
# +
from sklearn.datasets import load_digits
numpy.random.seed(0)
X, y = load_digits(return_X_y=True)
idxs = numpy.arange(X.shape[0])
numpy.random.shuffle(idxs)
X = X[idxs]
y = y[idxs]
X_train, y_train = X[:1000], y[:1000]
X_test, y_test = X[1000:], y[1000:]
# -
# Now, implement a facility location function to choose 1000 examples and, for comparison, a feature-based function to choose 1000 examples as well.
# +
fl_selector = ...
fb_selector = ...
# -
# Let's see how the subsets selected using facility location fare against those selected using random selection or feature-based selection.
# +
numpy.random.seed(0)
model = SGDClassifier(random_state=0)
counts = numpy.arange(10, 1001, 10)
random_idxs = numpy.array([numpy.random.choice(X_train.shape[0], replace=False, size=1000) for i in range(10)])
random_accuracies, fl_accuracies, fb_accuracies = [], [], []
for count in tqdm(counts):
#
idxs = ...
y_hat = model.predict(X_test)
acc = (y_hat == y_test).mean()
fl_accuracies.append(acc)
#
idxs = ...
y_hat = model.predict(X_test)
acc = (y_hat == y_test).mean()
fb_accuracies.append(acc)
accs = []
for i in range(10):
r_idxs = random_idxs[i, :count]
...
y_hat = model.predict(X_test)
acc = (y_hat == y_test).mean()
accs.append(acc)
random_accuracies.append(accs)
# +
plt.title("Reduced MNIST Classification", fontsize=14)
plt.plot(counts, numpy.mean(random_accuracies, axis=1), color='0.5', linewidth=2, label="Random")
plt.plot(counts, fl_accuracies, color='#FF6600', linewidth=2, label="Facility Location Optimization")
plt.plot(counts, fb_accuracies, color='g', linewidth=2, label="Feature-Based Optimization")
plt.xlabel("# Chosen Examples", fontsize=12)
plt.ylabel("Classification Accuracy")
plt.legend(loc=4, fontsize=12)
seaborn.despine()
plt.tight_layout()
plt.show()
# -
# Looks like the facility location function achieves high accuracy with only a small number of examples! Using only 40 examples achieves almost 90% accuracy, whereas it takes almost 200 randomly selected examples to get that performance on average.
| tutorials/Workshop Tutorial 1. Practical Apricot Usage - Worksheet.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:tensorflow-gpu]
# language: python
# name: conda-env-tensorflow-gpu-py
# ---
# +
import IPython.display as ipd
import matplotlib.pyplot as plt
from scipy.io.wavfile import read, write
fs, data = read("Data/result.wav")
# Generated from aa_DR1_MCPM0_sa1_float (corpus voice)
ipd.Audio(data, rate=16000, normalize=False)
plt.plot(data)
# -
fs, data = read("Data/aa_DR1_MCPM0_sa1_float.wav")
| .ipynb_checkpoints/audio-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sn
import pylab as pl
weather=['Sunny','Sunny','Overcast','Rainy','Rainy','Rainy','Overcast','Sunny','Sunny',
'Rainy','Sunny','Overcast','Overcast','Rainy']
temp=['Hot','Hot','Hot','Mild','Cool','Cool','Cool','Mild','Cool','Mild','Mild','Mild','Hot','Mild']
Humidity= ['High','High','High','High','Normal','Normal','Normal','High','Normal','Normal','Normal','High','Normal','High']
Windy=['f','t','f','f','f','t','t','f','f','f','t','t','f','t']
play=['No','No','Yes','Yes','Yes','No','Yes','No','Yes','Yes','Yes','Yes','Yes','No']
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
weather_encoded = le.fit_transform(weather)
weather_encoded
temp_encoded = le.fit_transform(temp)
temp_encoded
humidity_encoded = le.fit_transform(Humidity)
humidity_encoded
wind_encoded = le.fit_transform(Windy)
wind_encoded
play_encoded = le.fit_transform(play)
play_encoded
features = list(zip(weather_encoded, temp_encoded, humidity_encoded, wind_encoded))
features
from sklearn.linear_model import LogisticRegression
regr = LogisticRegression(solver = 'lbfgs', C = 0.01, max_iter = 300)
regr.fit(features, play_encoded)
play_encoded[0:5]
ypred = regr.predict(features)
ypred
a_1 = regr.score(features, play_encoded)
a_1
a_2 = regr.score(features, ypred)
a_2
from sklearn.metrics import accuracy_score
a_3 = accuracy_score(play_encoded, ypred)
a_3
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors = 1, metric='manhattan')
knn.fit(features, play_encoded)
play_encoded
yhat = knn.predict(features)
yhat
b_1 = knn.score(features, play_encoded)
b_1
b_2 = knn.score(features, yhat)
b_2
from sklearn import svm
clf = svm.SVC(kernel = 'rbf', gamma = 'auto')
clf.fit(features, play_encoded)
play_encoded
ypred_1 = clf.predict(features)
ypred_1
c_1 = clf.score(features, play_encoded)
c_1
c_2 = clf.score(features, ypred_1)
c_2
from sklearn import tree
dt = tree.DecisionTreeClassifier()
dt.fit(features, play_encoded)
play_encoded
yhat_1 = dt.predict(features)
yhat_1
d_1 = dt.score(features, play_encoded)
d_1
d_2 = dt.score(features, yhat_1)
d_2
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(n_estimators = 1000)
rf.fit(features, play_encoded)
play_encoded
ypred_2 = rf.predict(features)
ypred_2
e_2 = rf.score(features, ypred_2)
e_2
e_1 = rf.score(features, play_encoded)
e_1
from sklearn.naive_bayes import GaussianNB, MultinomialNB, BernoulliNB
gsn = GaussianNB()
gsn.fit(features, play_encoded)
play_encoded
yhat_2 = gsn.predict(features)
yhat_2
f_1 = gsn.score(features, play_encoded)
f_1
f_2 = gsn.score(features, yhat_2)
f_2
mul = MultinomialNB()
mul.fit(features, play_encoded)
play_encoded
ypred_3 = mul.predict(features)
ypred_3
g_1 = mul.score(features, play_encoded)
g_1
g_2 = mul.score(features, ypred_3)
g_2
ber = BernoulliNB()
ber.fit(features, play_encoded)
yhat_3 = ber.predict(features)
yhat_3
h_1 = ber.score(features, play_encoded)
h_1
h_2 = ber.score(features, yhat_3)
h_2
df = pd.DataFrame({'Training Score' : [a_1, b_1, c_1, d_1, e_1, f_1, g_1, h_1],
'Predicted Score' : [a_2, b_2, c_2, d_2, e_2, f_2, g_2, h_2],
}, index = ['Logistic Regression', 'KNN', 'SVM', 'Decision Tree', 'Random Forest', 'Gaussian NB', 'Multinomial NB', 'Bernoulli NB'])
df
| Play_Golf.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="-knK4sZodDZg"
# ##### Copyright 2018 The TensorFlow Probability Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# + cellView="form" id="zAM8G9A4dF4R"
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="cPw5xFcq1kpw"
# # Eight Schools
#
# <table class="tfo-notebook-buttons" align="left">
# <td><a target="_blank" href="https://www.tensorflow.org/probability/examples/Eight_Schools"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
# <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/probability/examples/Eight_Schools.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
# <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/probability/examples/Eight_Schools.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
# <td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/probability/examples/Eight_Schools.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
# </table>
# + [markdown] id="5MzjGu_O7HwY"
# The eight schools problem ([Rubin 1981](https://www.jstor.org/stable/1164617)) considers the effectiveness of SAT coaching programs conducted in parallel at eight schools. It has become a classic problem ([Bayesian Data Analysis](http://www.stat.columbia.edu/~gelman/book/), [Stan](https://github.com/stan-dev/rstan/wiki/RStan-Getting-Started)) that illustrates the usefulness of hierarchical modeling for sharing information between exchangeable groups.
#
# The implementation below is an adaptation of an Edward 1.0 [tutorial](https://github.com/blei-lab/edward/blob/master/notebooks/eight_schools.ipynb).
# + [markdown] id="TNuvn0Ih4D_R"
# # Imports
# + id="XMTEI6ep4D_S"
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import tensorflow.compat.v2 as tf
import tensorflow_probability as tfp
from tensorflow_probability import distributions as tfd
import warnings
tf.enable_v2_behavior()
plt.style.use("ggplot")
warnings.filterwarnings('ignore')
# + [markdown] id="cIbNcemwwO2y"
# # Data
#
# Section 5.5 of Bayesian Data Analysis (Gelman et al., 2013) describes the study:
#
# > *A study was performed for the Educational Testing Service to analyze the effects of special coaching programs for SAT-V (Scholastic Aptitude Test-Verbal) in each of eight high schools. The outcome variable in each study was the score on a special administration of the SAT-V, a standardized multiple choice test administered by the Educational Testing Service and used to help colleges make decisions about admissions; the scores can vary between 200 and 800, with mean about 500 and standard deviation about 100. The SAT examinations are designed to be resistant to short-term efforts directed specifically toward improving performance on the test; instead they are designed to reflect knowledge acquired and abilities developed over many years of education. Nevertheless, each of the eight schools in this study considered its short-term coaching program to be very successful at increasing SAT scores. Also, there was no prior reason to believe that any of the eight programs was more effective than any other or that some were more similar in effect to each other than to any other.*
#
# For each of the eight schools ($J = 8$), we have an estimated treatment effect $y_j$ and a standard error of the effect estimate $\sigma_j$. The treatment effects in the study were obtained by a linear regression on the treatment group using PSAT-M and PSAT-V scores as control variables. As there was no prior belief that any of the schools were more or less similar, or that any of the coaching programs would be more effective, we can consider the treatment effects as [exchangeable](https://en.wikipedia.org/wiki/Exchangeable_random_variables).
# + id="rSngqHwAKv_j"
num_schools = 8 # number of schools
treatment_effects = np.array(
[28, 8, -3, 7, -1, 1, 18, 12], dtype=np.float32) # treatment effects
treatment_stddevs = np.array(
[15, 10, 16, 11, 9, 11, 10, 18], dtype=np.float32) # treatment SE
fig, ax = plt.subplots()
plt.bar(range(num_schools), treatment_effects, yerr=treatment_stddevs)
plt.title("8 Schools treatment effects")
plt.xlabel("School")
plt.ylabel("Treatment effect")
fig.set_size_inches(10, 8)
plt.show()
# + [markdown] id="S6Yj8WEDwI3L"
# # Model
#
# To capture the data, we use a hierarchical normal model. It follows the generative process
#
# $$
# \begin{align*}
# \mu &\sim \text{Normal}(\text{loc}{=}0,\ \text{scale}{=}10) \\
# \log\tau &\sim \text{Normal}(\text{loc}{=}5,\ \text{scale}{=}1) \\
# \text{for } & i=1\ldots 8: \\
# & \theta_i \sim \text{Normal}\left(\text{loc}{=}\mu,\ \text{scale}{=}\tau \right) \\
# & y_i \sim \text{Normal}\left(\text{loc}{=}\theta_i,\ \text{scale}{=}\sigma_i \right)
# \end{align*}
# $$
#
# where $\mu$ represents the prior average treatment effect and $\tau$ controls how much variance there is between schools. The $y_i$ and $\sigma_i$ are observed. As $\tau \rightarrow \infty$, the model approaches the no-pooling model, i.e., each of the school treatment effect estimates is allowed to be more independent. As $\tau \rightarrow 0$, the model approaches the complete-pooling model, i.e., all of the school treatment effects are closer to the group average $\mu$. To restrict the standard deviation to be positive, we draw $\tau$ from a lognormal distribution (which is equivalent to drawing $\log(\tau)$ from a normal distribution).
#
# Following [Diagnosing Biased Inference with Divergences](http://mc-stan.org/users/documentation/case-studies/divergences_and_bias.html), we transform the model above into an equivalent non-centered model:
#
# $$
# \begin{align*}
# \mu &\sim \text{Normal}(\text{loc}{=}0,\ \text{scale}{=}10) \\
# \log\tau &\sim \text{Normal}(\text{loc}{=}5,\ \text{scale}{=}1) \\
# \text{for } & i=1\ldots 8: \\
# & \theta_i' \sim \text{Normal}\left(\text{loc}{=}0,\ \text{scale}{=}1 \right) \\
# & \theta_i = \mu + \tau \theta_i' \\
# & y_i \sim \text{Normal}\left(\text{loc}{=}\theta_i,\ \text{scale}{=}\sigma_i \right)
# \end{align*}
# $$
#
# We reify this model as a [JointDistributionSequential](https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/JointDistributionSequential) instance:
# + id="EiEtvl1zokAG"
model = tfd.JointDistributionSequential([
tfd.Normal(loc=0., scale=10., name="avg_effect"), # `mu` above
tfd.Normal(loc=5., scale=1., name="avg_stddev"), # `log(tau)` above
tfd.Independent(tfd.Normal(loc=tf.zeros(num_schools),
scale=tf.ones(num_schools),
name="school_effects_standard"), # `theta_prime`
reinterpreted_batch_ndims=1),
lambda school_effects_standard, avg_stddev, avg_effect: (
tfd.Independent(tfd.Normal(loc=(avg_effect[..., tf.newaxis] +
tf.exp(avg_stddev[..., tf.newaxis]) *
school_effects_standard), # `theta` above
scale=treatment_stddevs),
name="treatment_effects", # `y` above
reinterpreted_batch_ndims=1))
])
def target_log_prob_fn(avg_effect, avg_stddev, school_effects_standard):
"""Unnormalized target density as a function of states."""
return model.log_prob((
avg_effect, avg_stddev, school_effects_standard, treatment_effects))
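# As a quick sanity check of the non-centered parameterization used above, a plain NumPy sketch (separate from TFP) confirming that $\mu + \tau\theta'$ with $\theta' \sim \text{Normal}(0, 1)$ has the same distribution as $\theta \sim \text{Normal}(\mu, \tau)$:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, tau = 2.0, 3.0
theta_prime = rng.normal(0.0, 1.0, 200_000)
theta = mu + tau * theta_prime  # non-centered draw of theta ~ Normal(mu, tau)
print(round(theta.mean(), 1), round(theta.std(), 1))
```

The sample mean and standard deviation come out close to $\mu = 2$ and $\tau = 3$; the reparameterization changes the sampler's geometry, not the model.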
# + [markdown] id="jnVK-1yH9WCY"
# # Bayesian Inference
#
# Given the data, we perform Hamiltonian Monte Carlo (HMC) to compute the posterior distribution over the model's parameters.
# + id="-66vCUVrQRnb"
num_results = 5000
num_burnin_steps = 3000
# Improve performance by tracing the sampler using `tf.function`
# and compiling it using XLA.
@tf.function(autograph=False, experimental_compile=True)
def do_sampling():
return tfp.mcmc.sample_chain(
num_results=num_results,
num_burnin_steps=num_burnin_steps,
current_state=[
tf.zeros([], name='init_avg_effect'),
tf.zeros([], name='init_avg_stddev'),
tf.ones([num_schools], name='init_school_effects_standard'),
],
kernel=tfp.mcmc.HamiltonianMonteCarlo(
target_log_prob_fn=target_log_prob_fn,
step_size=0.4,
num_leapfrog_steps=3))
states, kernel_results = do_sampling()
avg_effect, avg_stddev, school_effects_standard = states
school_effects_samples = (
avg_effect[:, np.newaxis] +
np.exp(avg_stddev)[:, np.newaxis] * school_effects_standard)
num_accepted = np.sum(kernel_results.is_accepted)
print('Acceptance rate: {}'.format(num_accepted / num_results))
# + id="2-iMMOcFvE03"
fig, axes = plt.subplots(8, 2, sharex='col', sharey='col')
fig.set_size_inches(12, 10)
for i in range(num_schools):
axes[i][0].plot(school_effects_samples[:,i].numpy())
axes[i][0].title.set_text("School {} treatment effect chain".format(i))
sns.kdeplot(school_effects_samples[:,i].numpy(), ax=axes[i][1], shade=True)
axes[i][1].title.set_text("School {} treatment effect distribution".format(i))
axes[num_schools - 1][0].set_xlabel("Iteration")
axes[num_schools - 1][1].set_xlabel("School effect")
fig.tight_layout()
plt.show()
# + id="l4t9XLxSszBe"
print("E[avg_effect] = {}".format(np.mean(avg_effect)))
print("E[avg_stddev] = {}".format(np.mean(avg_stddev)))
print("E[school_effects_standard] =")
print(np.mean(school_effects_standard[:, ], axis=0))
print("E[school_effects] =")
print(np.mean(school_effects_samples[:, ], axis=0))
# + id="Wxp1uFW6RWMW"
# Compute the 95% interval for school_effects
school_effects_low = np.array([
np.percentile(school_effects_samples[:, i], 2.5) for i in range(num_schools)
])
school_effects_med = np.array([
np.percentile(school_effects_samples[:, i], 50) for i in range(num_schools)
])
school_effects_hi = np.array([
np.percentile(school_effects_samples[:, i], 97.5)
for i in range(num_schools)
])
# + id="yY-qBFTotd3F"
fig, ax = plt.subplots(nrows=1, ncols=1, sharex=True)
ax.scatter(np.array(range(num_schools)), school_effects_med, color='red', s=60)
ax.scatter(
np.array(range(num_schools)) + 0.1, treatment_effects, color='blue', s=60)
plt.plot([-0.2, 7.4], [np.mean(avg_effect),
np.mean(avg_effect)], 'k', linestyle='--')
ax.errorbar(
np.array(range(8)),
school_effects_med,
yerr=[
school_effects_med - school_effects_low,
school_effects_hi - school_effects_med
],
fmt='none')
ax.legend(('HMC', 'Observed effect', 'avg_effect'), fontsize=14)
plt.xlabel('School')
plt.ylabel('Treatment effect')
plt.title('HMC estimated school treatment effects vs. observed data')
fig.set_size_inches(10, 8)
plt.show()
# + [markdown] id="2dV93ZSzGSIm"
# We can observe the shrinkage toward the group `avg_effect` above.
# + id="LcljZ1prD91d"
print("Inferred posterior mean: {0:.2f}".format(
np.mean(school_effects_samples[:,])))
print("Inferred posterior mean se: {0:.2f}".format(
np.std(school_effects_samples[:,])))
# + [markdown] id="vWPCzgk7IMgt"
# # Criticism
#
# We obtain the posterior predictive distribution, i.e. a model for new data $y^*$ given the observed data $y$:
#
# $$ p(y^*|y) \propto \int_\theta p(y^* | \theta)p(\theta |y)d\theta$$
#
# We override the model's random-variable values, setting them to the means of the posterior distribution, and sample from that model to generate new data $y^*$.
# + id="6eV4Cx0HQeMU"
sample_shape = [5000]
_, _, _, predictive_treatment_effects = model.sample(
value=(tf.broadcast_to(np.mean(avg_effect, 0), sample_shape),
tf.broadcast_to(np.mean(avg_stddev, 0), sample_shape),
tf.broadcast_to(np.mean(school_effects_standard, 0),
sample_shape + [num_schools]),
None))
# + id="y3c8W--fPmph"
fig, axes = plt.subplots(4, 2, sharex=True, sharey=True)
fig.set_size_inches(12, 10)
fig.tight_layout()
for i, ax in enumerate(axes):
sns.kdeplot(predictive_treatment_effects[:, 2*i].numpy(),
ax=ax[0], shade=True)
ax[0].title.set_text(
"School {} treatment effect posterior predictive".format(2*i))
sns.kdeplot(predictive_treatment_effects[:, 2*i + 1].numpy(),
ax=ax[1], shade=True)
ax[1].title.set_text(
"School {} treatment effect posterior predictive".format(2*i + 1))
plt.show()
# + id="ATOOfzg0HMII"
# The mean predicted treatment effects for each of the eight schools.
prediction = np.mean(predictive_treatment_effects, axis=0)
# + [markdown] id="MkwASzOLSgbs"
# We can look at the residuals between the treatment-effect data and the model's posterior predictions. These are consistent with the plot above, which shows the shrinkage of the estimated effects toward the population mean.
# + id="ulqqNf_AHMBm"
treatment_effects - prediction
# + [markdown] id="0KMqrBaGRo4S"
# Since we have a predictive distribution for each school, we can consider the distribution of the residuals as well.
# + id="7j9RAYhIRDDz"
residuals = treatment_effects - predictive_treatment_effects
# + id="zW1RKYtBRIhd"
fig, axes = plt.subplots(4, 2, sharex=True, sharey=True)
fig.set_size_inches(12, 10)
fig.tight_layout()
for i, ax in enumerate(axes):
sns.kdeplot(residuals[:, 2*i].numpy(), ax=ax[0], shade=True)
ax[0].title.set_text(
"School {} treatment effect residuals".format(2*i))
sns.kdeplot(residuals[:, 2*i + 1].numpy(), ax=ax[1], shade=True)
ax[1].title.set_text(
"School {} treatment effect residuals".format(2*i + 1))
plt.show()
# + [markdown] id="PIReUYcT0CEZ"
# # Acknowledgements
#
# This tutorial was originally written in Edward 1.0 ([source](https://github.com/blei-lab/edward/blob/master/notebooks/eight_schools.ipynb)). We thank everyone who contributed to writing and revising that version.
# + [markdown] id="g7cgoQ1XyqGv"
# # References
#
# 1. <NAME>. Estimation in parallel randomized experiments. Journal of Educational Statistics, 6(4):377-401, 1981.
# 2. <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, and <NAME>. Bayesian Data Analysis, Third Edition. Chapman and Hall/CRC, 2013.
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/vansjyo/DeepMars-Colab/blob/master/Preprocess.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="e0nxEiswCbO8" colab_type="code" colab={}
from google.colab import drive
drive.mount('/content/drive')
# + id="V4IyWkQ3sRFG" colab_type="code" colab={}
import keras
import matplotlib.pyplot as plt
import numpy as np
from glob import glob
from PIL import Image
import gdal
import cv2
from skimage import io
from keras.utils import to_categorical
from keras.preprocessing.image import ImageDataGenerator
# + id="BM7S3GBbsoQ0" colab_type="code" colab={}
train_images = sorted(glob("/content/drive/My Drive/DeepCrater/Test/*tif"))
label_images = sorted(glob("/content/drive/My Drive/DeepCrater/Labels/*tif"))
print("Labelled : ", len(label_images), "Training : ", len(train_images))
def ImportImage( filename):
img = io.imread(filename)
img = img.astype('float32') / 255
print(img.shape)
return np.array(img)[:]
X = np.array( [ ImportImage(img) for img in train_images ] )
Y = np.array( [ ImportImage(img) for img in label_images ] )
plt.imshow(X[0])
# + id="mtTYdTNtwgl2" colab_type="code" colab={}
augmented_x = []
augmented_y = []
# Augment rare
datagen = ImageDataGenerator(
rotation_range=90, # randomly rotate images in the range (degrees, 0 to 90)
width_shift_range=0.2, # randomly shift images horizontally (fraction of total width)
height_shift_range=0.1, # randomly shift images vertically (fraction of total height)
zoom_range=[0.96,1], # set range for random zoom
horizontal_flip=True, # randomly flip images
vertical_flip=True, # randomly flip images
)
datagen.fit(X)
iteration = 0
limit = 20
for x_batch, y_batch in datagen.flow(X, Y, batch_size = 1):
fname_x = "/content/drive/My Drive/DeepCrater/Augmented/Train/Train_" + str(iteration) + ".tif"
fname_y = "/content/drive/My Drive/DeepCrater/Augmented/Label/Label_" + str(iteration) + ".tif"
# io.imsave(fname_x, x_batch)
# io.imsave(fname_y, y_batch)
augmented_x.extend(x_batch[:])
augmented_y.extend(y_batch[:])
iteration += 1
if iteration >= limit:
break
augmented_x = np.array(augmented_x)
augmented_y = np.array(augmented_y)
print(augmented_x.shape)
# + id="edK7k392IMic" colab_type="code" colab={}
fig,a = plt.subplots(10,2)
for i in range(10):
a[i][0].imshow(augmented_x[i])
a[i][1].imshow(augmented_y[i])
plt.figure()
# + id="4QXN2dvzx2eN" colab_type="code" colab={}
# Shuffle the augmented pairs (one shared permutation keeps images and labels aligned)
shufflePermutation = np.random.permutation(len(augmented_x))
x_train = augmented_x[shufflePermutation]
y_train = augmented_y[shufflePermutation]
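# Indexing both arrays with the same permutation is what keeps each image paired with its label. A minimal sketch of that invariant on toy data (assuming nothing beyond NumPy):

```python
import numpy as np

x = np.arange(6).reshape(3, 2)   # three toy "images": rows [0,1], [2,3], [4,5]
y = np.array([10, 20, 30])       # their "labels": row i gets 10*(i+1)
perm = np.random.permutation(len(x))
x_shuf, y_shuf = x[perm], y[perm]
# after shuffling, each image row still sits next to its own label
for xi, yi in zip(x_shuf, y_shuf):
    assert yi == 10 * (xi[0] // 2 + 1)
```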
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .sh
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Bash
# language: bash
# name: bash
# ---
# 
# # Shell Scripting
#
# ## Scripts
# A script is a command line program that contains a series of commands.
#
# Commands contained in the script are executed by an interpreter. In the case of shell scripts, the shell acts as the interpreter and executes the commands listed in the script one after the other.
#
# Anything you can execute at the command line, you can put into a shell script.
#
# Shell scripts are great at automating tasks.
# If you find yourself running a series of commands to accomplish a given task, and you need to perform that task again in the future, you can, and probably should, create a shell script for it.
# **WARNING**: Cells can only send commands and display their output. If the shell expects any additional interaction, the cell will block and you will have to interrupt the kernel to recover.
#
# ### <span style="background-color: #FF9D5C">script.sh</span>
# +
#!/bin/bash
echo "Scripting is fun!"
# -
# ***
# | script.sh | Description |
# |:-------------------------|:------------|
# | #!/bin/bash | `#!` is known as the shebang. It indicates which interpreter will run the script. |
# | echo "Scripting is fun!" | The command that will be executed in the script |
# `$ chmod 755 script.sh` -> Makes the file executable.
#
# `$ ./script.sh` -> run the script
#
# Scripting is fun! -> Result of the script
# If you do not supply a **shebang** specifying an interpreter on the first line of the script, the commands in the script will be executed using your current shell.
#
# Different shells have slightly varying syntax.
# ## Variables
# * Storage locations that have a name
# * Name-value pairs
# * Syntax:
# VARIABLE_NAME="Value"
# **Make sure to not use any spaces before or after the equal sign.**
# * Variables are case sensitive
# * By convention variables are in all uppercase
# * Can contain letters, digits, and underscores
# * Can start with letters or underscores
# * **Cannot** start with a digit
# * **Cannot** contain special characters such as "-" or "@"
# ## Variable Usage
#
# To use a variable, precede the variable name with a dollar sign:
# +
#!/bin/bash
MY_SHELL="bash"
echo "I like the $MY_SHELL shell."
# -
# ***
# You can also enclose the variable name in curly braces and precede the opening brace with a dollar sign.
#
# **The curly brace syntax is optional unless you need to immediately precede or follow the variable with additional data.**
# +
#!/bin/bash
MY_SHELL="bash"
echo "I like the ${MY_SHELL} shell."
# -
# ## Assign command output to a variable
# To do this, enclose the command in parentheses and precede it with a dollar sign:
# +
#!/bin/bash
SERVER_NAME=$(hostname)
echo "You are running this script on ${SERVER_NAME}."
# -
# # Tests
# To create a test, place a conditional expression between brackets.
#
# **Syntax:**
#
# [ condition-to-test-for ]
#
# **Example:**
#
# [ -e /etc/passwd ]
#
# Exits with a **status of 0 (true) or 1 (false)** depending on the evaluation of EXPR.
# You can run the command **`help test`** to see the various types of tests you can perform.
help test
# You can also read the man page for test by typing in **`man test`**.
man test
# ## File operators (tests)
# -d FILE True if file is a directory.
#
# -e FILE True if file exists.
#
# -f FILE True if file exists and is a regular file.
#
# -r FILE True if file is readable by you.
#
# -s FILE True if file exists and is not empty.
#
# -w FILE True if file is writable by you.
#
# -x FILE True if file is executable by you.
# ## String operators (test)
# -z STRING True if string is empty.
#
# -n STRING True if string is not empty.
#
# STRING1 = STRING2
#
# True if the strings are equal.
#
# STRING1 != STRING2
#
# True if the strings are not equal.
#
# STRING1 < STRING2
#
# True if STRING1 sorts before STRING2 lexicographically.
#
# STRING1 > STRING2
#
# True if STRING1 sorts after STRING2 lexicographically.
#
# ## Arithmetic operators (test)
# INTEGER1 **-eq** INTEGER2
#
# INTEGER1 is equal to INTEGER2
#
# INTEGER1 **-ge** INTEGER2
#
# INTEGER1 is greater than or equal to INTEGER2
#
# INTEGER1 **-gt** INTEGER2
#
# INTEGER1 is greater than INTEGER2
#
# INTEGER1 **-le** INTEGER2
#
# INTEGER1 is less than or equal to INTEGER2
#
# INTEGER1 **-lt** INTEGER2
#
# INTEGER1 is less than INTEGER2
#
# INTEGER1 **-ne** INTEGER2
#
# INTEGER1 is not equal to INTEGER2
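# For comparison, the file operators listed above map onto Python's standard library roughly as follows. This is a sketch for cross-reference, not a replacement for `test`:

```python
import os

def file_tests(path):
    """Rough Python analogues of bash's test file operators."""
    return {
        "-d": os.path.isdir(path),                                 # directory
        "-e": os.path.exists(path),                                # exists
        "-f": os.path.isfile(path),                                # regular file
        "-r": os.access(path, os.R_OK),                            # readable
        "-s": os.path.exists(path) and os.path.getsize(path) > 0,  # non-empty
        "-w": os.access(path, os.W_OK),                            # writable
        "-x": os.access(path, os.X_OK),                            # executable
    }

print(file_tests("/etc/passwd"))
```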
# 
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import urllib.request
from bs4 import BeautifulSoup
url = 'http://www.yahoo.co.jp/'
ua = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) '\
'AppleWebKit/537.36 (KHTML, like Gecko) '\
'Chrome/55.0.2883.95 Safari/537.36 '
req = urllib.request.Request(url, headers={'User-Agent': ua})
html = urllib.request.urlopen(req)
soup = BeautifulSoup(html, "html.parser")
li_list = soup.find(id='yahooservice').find_all('li')
# li_list = soup.find('div', attrs={'id': 'yahooservice'}).find_all('li')
# li_list = soup.select('div#yahooservice > ul > li')
for li in li_list:
print(li.find('a').text)
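# The live Yahoo! Japan page changes over time, so the `yahooservice` id may no longer exist there. The same find/find_all pattern on a fixed HTML string (markup invented for illustration):

```python
from bs4 import BeautifulSoup

html = """
<div id="yahooservice">
  <ul>
    <li><a href="#">News</a></li>
    <li><a href="#">Mail</a></li>
  </ul>
</div>
"""
soup = BeautifulSoup(html, "html.parser")
# same pattern as above: locate the container by id, then iterate its list items
texts = [li.find("a").text for li in soup.find(id="yahooservice").find_all("li")]
print(texts)
```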
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext lab_black
# %matplotlib inline
# # Build GeoJSON files from airspace environment data
from glob import glob
from collections import defaultdict
import matplotlib.pyplot as plt
from shapely.geometry import Polygon
import pandas as pd
import geopandas as gpd
# #### Airspace Environment data can be found in EUROCONTROL DDR2 web portal (Dataset Files/Airspace Environment Datasets)
env_folder = "your_ENV_folder"
# #### <br> Load airblocks coordinates
# +
with open(glob(env_folder + "/Sectors*.are")[0]) as are:
lines = are.readlines()
airblocks = defaultdict(list)
name = ""
for l in lines:
elems = l.strip().split(" ")
if len(elems) > 2:
name = elems[-1]
else:
airblocks[name].append((float(elems[1]) / 60, float(elems[0]) / 60)) # lon, lat
Polygon(airblocks["039LF"])
# -
# #### <br> Load elementary airspace volumes with associated airblocks
sls = pd.read_csv(
glob(env_folder + "/Sectors*.sls")[0],
sep=" ",
header=None,
names=["airspace", "_", "airblock", "flmin", "flmax"],
)
sls.drop(["_"], axis=1, inplace=True)
sls.head()
# #### Add airblocks geodata
sls["geometry"] = sls["airblock"].apply(lambda x: Polygon(airblocks[x]))
geo_sls = gpd.GeoDataFrame(sls)
geo_sls.head()
# #### <br> Load all collapse airspace volumes
# +
spc = pd.read_csv(glob(env_folder + "/Sectors*.spc")[0], sep=";")
collapse = defaultdict(list)
name = ""
for _, row in spc.iterrows():
if row[0] == "A":
name = row[1]
elif row[0] == "S":
collapse[name].append(row[1])
print(collapse["LFBBRL12"])
# -
# #### <br> Get 3D geometry of a collapse airspace volume
def geom(ap):
return geo_sls.query(f"airspace in {collapse[ap]}")
df = geom("LFBBUBDX")
df.head()
# #### Ex: 2D visualization of LFBBUBDX at FL195 and FL345
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(15, 8))
df.query("flmin == 195").plot(ax=axes[0], alpha=0.3, color="green")
df.query("flmin == 345").plot(ax=axes[1], alpha=0.3, color="blue")
for i in range(2):
axes[i].set_axis_off()
axes[i].get_xaxis().set_visible(False)
axes[i].get_yaxis().set_visible(False)
fig.tight_layout()
plt.savefig("ubdx.png")
plt.show()
# #### <br> Export to GeoJSON
def gson(ap, file_name):
geo_sls.query(f"airspace in {collapse[ap]}").to_file(file_name, driver="GeoJSON")
gson("LFBBUBDX", "ubdx.geojson")
gdf = gpd.read_file("ubdx.geojson")
gdf.head()
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Vue instances
# - A Vue instance is created with `new Vue()`; the `el` defined inside it takes over rendering at a given spot in the DOM.
#
# - `data` holds the instance's data.
#
# We can reference it with interpolation expressions.
#
# - ``<button v-on:click="handleBtnClick">Submit</button>``
#
# `v-on:click="handleBtnClick"` is equivalent to `@click="handleBtnClick"`.
#
# `handleBtnClick` must be defined in the `methods` option of the Vue instance.
#
#
# - Defining a global component:
# ```
# Vue.component('item',{
# template: '<li>hello world<li/>'
# })
# ```
# It can then be used as `<item></item>`.
#
#
# - Defining a local component:
# ```
# var item ={'item',{
# template: '<li>hello world<li/>'
# }}
# ```
# - vm.$data returns the data defined on the instance:
# ```
# var vm = new Vue({
# el: '',
# data(){
# return {
# list:[],
# inputValue: ''
# }
# }
# })
# ```
#
# Everything that starts with `$` is a Vue instance property.
#
# `$el` and `$data` are both Vue instance properties.
#
# A Vue instance can be destroyed with `vm.$destroy()`.
#
# **Every component is a Vue instance.**
# ## vue实例的生命周期的钩子
# 
#
# - 初始化事件和生命周期相关的内容结束后.在此,会执行`beforecreated()`函数.
#
# - 处理外部的一些注入(eg:数据的双向绑定)并作出响应.会触发`created()`事件.
#
# - vue实例中是否有`el`选项?没有就看看有没有 `template`这个事件.
#
# - vue实例中是否有`template`属性? 没有.把el外层的HTML当做模板. 有就使用他自己本身.
#
# - 这样模板编译完毕以后,就会触发`beforeMount()`这个事件
#
# - DOM节点就会挂载在页面之上,此时页面渲染完毕.就会触发`mounted()`这个事件
#
# - 当数据发生改变,页面没有渲染之前,会触发`beforeUpdate()`事件
#
# - 当数据发生改变,页面重新渲染完毕,会触发`updated()`事件
#
# **使用vm.$destroy()可以测试下面这两个方法**
#
# - 当页面的生命周期被销毁之前,会触发`beforeDestroy()`事件.
#
# - 当页面的生命周期被销毁完毕,会触发 `destroy()`这个事件.
#
# ```html
# <!DOCTYPE html>
# <html lang="en">
#
# <head>
# <meta charset="UTF-8">
# <meta name="viewport" content="width=device-width, initial-scale=1.0">
# <meta http-equiv="X-UA-Compatible" content="ie=edge">
# <script src="https://unpkg.com/vue"></script>
# <title>Vue生命周期</title>
# </head>
#
# <body>
# <div id="app">
# <input type="text" v-model="inputValue">
# <button @click='handleBtnClick'>点击</button>
# <ul>
# <todo-list @delete='handleBtnDelete' :content="item" v-for="(item,index) of list" :key="item.id">{{item}}</todo-list>
# </ul>
# </div>
# <script>
# var TodoList = ({
# props: ['content', 'index'],
# template: '<li @click="handleClick">{{content}}</li>',
# methods: {
# handleClick() {
# this.$emit('delete', this.index)
# }
# }
# })
# // A lifecycle hook is a function the Vue instance runs automatically at a certain point in time
# var vm = new Vue({
# el: '#app',
# data() {
# return {
# list: [],
# inputValue: ''
# }
# },
# components: { TodoList: TodoList },
# methods: {
# handleBtnClick() {
# this.list.push(this.inputValue)
# this.inputValue = ''
# },
# handleBtnDelete(index) {
# this.list.splice(index, 1)
# }
# },
# beforeCreate() {
# console.log('initializing events and lifecycle');
# console.log('before created!');
#
# },
# created() {
# console.log('injections processed; reactivity set up');
# console.log('created');
# },
# beforeMount() {
# console.log('deciding whether the template comes from el or from the template option');
# console.log('before mount');
# },
# mounted() {
# console.log('template compiled; DOM mounted into the Vue instance');
# console.log('mounted')
# },
# beforeUpdate() {
# console.log('data changed, before the DOM updates')
# console.log('beforeUpdate');
#
# },
# updated() {
# console.log('page re-rendered');
# console.log('updated');
# },
# beforeDestroy() {
# console.log('just before the Vue instance is destroyed');
# console.log('beforeDestroy');
# },
# destroyed() {
# console.log('Vue instance destroyed');
# console.log('destroyed');
# }
#
#
# })
# </script>
# </body>
# ```
# ## Vue template syntax
# - {{content}} interpolation expression
# - v-text escapes HTML tags
# - v-html parses the HTML and renders it
# ```html
# <!DOCTYPE html>
# <html lang="en">
#
# <head>
# <meta charset="UTF-8">
# <meta name="viewport" content="width=device-width, initial-scale=1.0">
# <meta http-equiv="X-UA-Compatible" content="ie=edge">
# <script src="https://unpkg.com/vue"></script>
# <title>Vue生命周期</title>
# </head>
#
# <body>
# <div id="app">
# {{name + '123'}}
#
# <div>v-text escapes its content; equivalent to an interpolation expression</div>
#
# <div v-text="name + '乐二恩'"></div>
#
# <div>v-html parses the content directly as HTML</div>
#
# <div v-html="name2 +'乐二恩'"></div>
# </div>
# <script>
#
# // A lifecycle hook is a function the Vue instance runs automatically at a certain point in time
# var vm = new Vue({
# el: '#app',
# data() {
# return {
# name: 'xiaoming',
# name2: '<h1>xiaoming</h1>'
# }
# }
#
# })
# </script>
# </body>
# ```
# ## Computed properties, methods, and watchers in Vue
# ### Computed properties (computed) are cached
#
# To test: change a `data` value the computed property does not use, e.g. `vm.$data.age = 12`; the function in computed() does not run.
#
# But change a value the computed property does use, such as `firstName` or `lastName`, and the function in computed() runs.
# ```html
# <!DOCTYPE html>
# <html lang="en">
#
# <head>
# <meta charset="UTF-8">
# <meta name="viewport" content="width=device-width, initial-scale=1.0">
# <meta http-equiv="X-UA-Compatible" content="ie=edge">
# <script src="https://unpkg.com/vue"></script>
# <title>Vue计算属性&侦听器方法</title>
# </head>
#
# <body>
# <div id="app">
# {{firstName + ' ' + lastName}}
#
# {{age}}
#
# {{fullName}}
#
# </div>
# <script>
# // A lifecycle hook is a function the Vue instance runs automatically at a certain point in time
# var vm = new Vue({
# el: '#app',
# data() {
# return {
# lastName: 'luoyu',
# firstName: 'piaoshang',
# age: 28
# // fullName: '<NAME>'
# }
# },
# computed: {
# //computed property: cached
# fullName() {
# console.log('the cached computed property was recomputed');
#
# return this.firstName + ' ' + this.lastName
# }
# }
#
# })
# </script>
# </body>
# ```
# ### Testing a method
#
# - To test:
#
# - `vm.firstName= "li"`
#
# (index):43 methods are not cached
#
# ```js
# methods: {
# fullNames() {
# console.log('methods are not cached');
# return this.firstName + ' ' + this.lastName
# }
# }
# ```
#
# Invoked as: `{{fullNames()}}`
# ### The watcher approach
#
# ```js
# watch:{
# firstName(){
# console.log('detected a change in firstName');
# this.fullName = this.firstName +' ' + this.lastName
# },
# lastName(){
# console.log('detected a change in lastName');
# this.fullName = this.firstName +' ' + this.lastName
# }
# }
#
# ```
#
# Test with: `vm.$data.firstName = 'hah'` or `vm.$data.firstName = 'sda'`
# In summary, the computed-property (`computed`) approach is the simplest!
# ## Computed property setters and getters
# ```html
# <!DOCTYPE html>
# <html lang="en">
#
# <head>
# <meta charset="UTF-8">
# <meta name="viewport" content="width=device-width, initial-scale=1.0">
# <meta http-equiv="X-UA-Compatible" content="ie=edge">
# <script src="https://unpkg.com/vue"></script>
# <title>Vue计算属性&侦听器方法</title>
# </head>
#
# <body>
# <div id="app">
# {{fullName}}
# </div>
# <script>
# var vm = new Vue({
# el: '#app',
# data() {
# return {
# lastName: 'luoyu',
# firstName: 'piaoshang',
# age: 28
# }
# },
# methods:{
# ObjToArr(){
# // just for fun
# return Array.prototype.slice.apply(arguments,[index,1])
# }
# },
# computed: {
# fullName: {
# get() {
# return this.firstName + ' ' +this.lastName
# },
# set(value) {
# var arr = value.split(" ")
# this.firstName = arr[0]
# this.lastName = arr[1]
# }
# }
# }
# })
# </script>
# </body>
# ```
#
# Test: `vm.fullName ='<NAME>'`
# ## Style binding in Vue
# ### Object syntax for class binding
#
# - Declare a dynamic class expressed as a key-value pair.
# - Bind an event and toggle the value with a negation to switch states.
# ```html
# <!DOCTYPE html>
# <html lang="en">
#
# <head>
# <meta charset="UTF-8">
# <meta name="viewport" content="width=device-width, initial-scale=1.0">
# <meta http-equiv="X-UA-Compatible" content="ie=edge">
# <script src="https://unpkg.com/vue"></script>
# <title>Vue的样式绑定</title>
#
# <style>
# .activated {
# color: red;
# }
# </style>
# </head>
#
# <body>
# <div id="app">
# <div @click="handleChangeColor" :class="{activated: isActivated}">hello world</div>
# </div>
# <script>
# // A lifecycle hook is a function the Vue instance runs automatically at a certain point in time
# var vm = new Vue({
# el: '#app',
# data() {
# return {
# isActivated: false
# }
# },
# methods: {
# handleChangeColor() {
# // if(this.isActivated === false){
# // this.isActivated = true
# // }else{
# // this.isActivated = false
# // }
#
# // shorthand
# this.isActivated = !this.isActivated
# }
# }
# })
# </script>
# </body>
# ```
# ### Array syntax for toggling state
#
# - The dynamic class value is an array; changing the state of the values in this array changes the state of the dynamic class.
# - Likewise, bind an event and mutate the data from inside it.
# - A ternary expression is the recommended idiom.
#
# ```html
# <!DOCTYPE html>
# <html lang="en">
#
# <head>
# <meta charset="UTF-8">
# <meta name="viewport" content="width=device-width, initial-scale=1.0">
# <meta http-equiv="X-UA-Compatible" content="ie=edge">
# <script src="https://unpkg.com/vue"></script>
# <title>Vue的样式绑定</title>
#
# <style>
# .activated {
# color: red;
# }
#
# .activatedOne {
# border: 1px solid #3333
# }
# </style>
# </head>
#
# <body>
# <div id="app">
# <div @click="handleChangeColor" :class="[activated,activatedOne,activatedTwo]">hello world</div>
# </div>
# <script>
# // A lifecycle hook is a function the Vue instance runs automatically at a certain point in time
# var vm = new Vue({
# el: '#app',
# data() {
# return {
# activated: '',
# activatedOne: '',
# activatedTwo: 'activatedTwo'
# }
# },
# methods: {
# handleChangeColor() {
# this.activated = this.activated === 'activated' ? '' : 'activated'
# this.activatedOne = this.activatedOne === 'activatedOne' ? '' : 'activatedOne'
# }
# }
# })
# </script>
# </body>
# ```
# ## Changing element state with `:style`
# ```html
# <!DOCTYPE html>
# <html lang="en">
#
# <head>
# <meta charset="UTF-8">
# <meta name="viewport" content="width=device-width, initial-scale=1.0">
# <meta http-equiv="X-UA-Compatible" content="ie=edge">
# <script src="https://unpkg.com/vue"></script>
# <title>Vue的样式绑定</title>
#
# <style>
# .activated {
# color: red;
# }
#
# .activatedOne {
# border: 1px solid #3333
# }
# </style>
# </head>
#
# <body>
# <div id="app">
# <div @click="handleChangeColor" :style="styleObj">hello world</div>
# </div>
# <script>
# // A lifecycle hook is a function the Vue instance runs automatically at a certain point in time
# var vm = new Vue({
# el: '#app',
# data() {
# return {
# styleObj: {
# color: "red"
# }
# }
# },
# methods: {
# handleChangeColor() {
# this.styleObj.color = this.styleObj.color === 'red' ? 'yellow' : 'red'
# }
# }
# })
# </script>
# </body>
# ```
# Written with the array syntax:
# ```html
# <!DOCTYPE html>
# <html lang="en">
#
# <head>
# <meta charset="UTF-8">
# <meta name="viewport" content="width=device-width, initial-scale=1.0">
# <meta http-equiv="X-UA-Compatible" content="ie=edge">
# <script src="https://unpkg.com/vue"></script>
# <title>Vue的样式绑定</title>
#
# <style>
# .activated {
# color: red;
# }
#
# .activatedOne {
# border: 1px solid #3333
# }
# </style>
# </head>
#
# <body>
# <div id="app">
# <div @click="handleChangeColor" :style="[styleObj,{fontSize: '20px'}]">hello world</div>
# </div>
# <script>
# // A lifecycle hook is a function the Vue instance runs automatically at a certain point in time
# var vm = new Vue({
# el: '#app',
# data() {
# return {
# styleObj: {
# color: "red"
# }
# }
# },
# methods: {
# handleChangeColor() {
# this.styleObj.color = this.styleObj.color === 'red' ? 'yellow' : 'red'
# }
# }
# })
# </script>
# </body>
# ```
# ## Conditional rendering in Vue
# ## v-if='show': show decides whether the element exists on the page
# ```html
# <!DOCTYPE html>
# <html lang="en">
#
# <head>
# <meta charset="UTF-8">
# <meta name="viewport" content="width=device-width, initial-scale=1.0">
# <meta http-equiv="X-UA-Compatible" content="ie=edge">
# <script src="https://unpkg.com/vue"></script>
# <title>Vue的样式绑定</title>
#
# </head>
#
# <body>
# <div id="app">
# <div v-if='show'>{{message}}</div>
# <button @click="handleShowDiv">点击</button>
# </div>
# <script>
# // A lifecycle hook is a function the Vue instance runs automatically at a certain point in time
# var vm = new Vue({
# el: '#app',
# data() {
# return {
# message: 'hi ming',
# show: false
# }
# },
# methods: {
# handleShowDiv() {
# this.show = this.show === false ? true : false
# }
# }
# })
# </script>
# </body>
# ```
# ### Using v-show
# ```html
# <!DOCTYPE html>
# <html lang="en">
#
# <head>
# <meta charset="UTF-8">
# <meta name="viewport" content="width=device-width, initial-scale=1.0">
# <meta http-equiv="X-UA-Compatible" content="ie=edge">
# <script src="https://unpkg.com/vue"></script>
# <title>Vue的样式绑定</title>
#
# </head>
#
# <body>
# <div id="app">
# <div v-show='show'>{{message}}</div>
# <button @click="handleShowDiv">点击</button>
# </div>
# <script>
# // A lifecycle hook is a function the Vue instance runs automatically at a certain point in time
# var vm = new Vue({
# el: '#app',
# data() {
# return {
# message: 'hi ming',
# show: false
# }
# },
# methods: {
# handleShowDiv() {
# this.show = this.show === false ? true : false
# }
# }
# })
# </script>
# </body>
# ```
# ### The difference between v-if and v-show
# 
#
# You can see that the `v-show` element is rendered to the page and merely hidden, while `v-if` removes the element from the page entirely.
#
# ```html
# <div id="app">
# <div v-if='show' data-test='v-if'>{{message}}</div>
# <div v-show='show' data-test='v-show'>{{message}}</div>
# <button @click="handleShowDiv">点击</button>
# </div>
# ```
#
# Test result:
#
# 
#
# **So v-show is the one we use more often: we only need to hide the DOM node, not add and remove it repeatedly.**
# ### v-if & v-else
# ```html
# <!DOCTYPE html>
# <html lang="en">
#
# <head>
# <meta charset="UTF-8">
# <meta name="viewport" content="width=device-width, initial-scale=1.0">
# <meta http-equiv="X-UA-Compatible" content="ie=edge">
# <script src="https://unpkg.com/vue"></script>
# <title>Vue的样式绑定</title>
#
# </head>
#
# <body>
# <div id="app">
# <div v-if='show'>{{message}}</div>
# <div v-else>{{message2}}</div>
#
# <!-- no other tag may be placed between the two -->
# <button @click="handleShowDiv">点击</button>
# </div>
# <script>
# // A lifecycle hook is a function the Vue instance runs automatically at a certain point in time
# var vm = new Vue({
# el: '#app',
# data() {
# return {
# message: 'hi ming',
# message2: 'bye ming',
# show: false
# }
# },
# methods: {
# handleShowDiv() {
# this.show = this.show === false ? true : false
# }
# }
# })
# </script>
# </body>
# ```
# ### Using if / else-if / else
# ```html
# <!DOCTYPE html>
# <html lang="en">
#
# <head>
# <meta charset="UTF-8">
# <meta name="viewport" content="width=device-width, initial-scale=1.0">
# <meta http-equiv="X-UA-Compatible" content="ie=edge">
# <script src="https://unpkg.com/vue"></script>
# <title>Vue的样式绑定</title>
#
# </head>
#
# <body>
# <div id="app">
# <div v-if='show === "a"'>{{message}} in a place</div>
# <div v-else-if='show ==="b"'>{{message2}} in b place</div>
# <div v-else>{{message3}} in others place</div>
#
# <!-- no other tag may be placed between the two -->
# <button @click="handleShowDiv">点击</button>
# </div>
# <script>
# // A lifecycle hook is a function the Vue instance runs automatically at a certain point in time
# var vm = new Vue({
# el: '#app',
# data() {
# return {
# message: 'hi ming',
# message2: 'bye ming',
# message3: 'bye',
# show: 'a'
# }
# },
# methods: {
# handleShowDiv() {
# if (this.show === 'a') {
# this.show = 'b'
# }
# else if (this.show === 'b') {
# this.show = ''
# }
# }
# }
# })
# </script>
# </body>
# ```
# ### Solving DOM reuse in Vue
# When using input fields, Vue may reuse the same element across branches.
#
# Adding a key tells Vue not to reuse the previous element, much like a Symbol in ES6.
#
# ```html
# <!DOCTYPE html>
# <html lang="en">
#
# <head>
# <meta charset="UTF-8">
# <meta name="viewport" content="width=device-width, initial-scale=1.0">
# <meta http-equiv="X-UA-Compatible" content="ie=edge">
# <script src="https://unpkg.com/vue"></script>
# <title>Vue的样式绑定</title>
#
# </head>
#
# <body>
# <div id="app">
# <div v-if='show'>
# <!-- adding a key tells Vue not to reuse the previous element, much like a Symbol in ES6 -->
# Username <input key='username' />
# </div>
# <div v-else>
# Password <input key='password' />
# </div>
#
# <button @click="handleShowDiv">点击</button>
# </div>
# <script>
# // A lifecycle hook is a function the Vue instance runs automatically at a certain point in time
# var vm = new Vue({
# el: '#app',
# data() {
# return {
# show: true
# }
# },
# methods: {
# handleShowDiv() {
# this.show = this.show === true ? false : true
# }
# }
# })
# </script>
# </body>
# ```
# ## Notes on list rendering in Vue
# ### Looping over arrays
# - You cannot add elements to an array by index assignment.
#
# - Use push to add elements to the array.
#
# Common array methods:
#
# pop - remove the last item
#
# push - append an item
#
# shift - remove the first item of the array
#
# unshift - prepend an item to the array
#
# splice - slice/replace part of the array
#
# sort - sort the array
#
# reverse - reverse the array
#
# split - (string method) often used to split on spaces into an array.
#
# Changing the data inside the array:
# - First, the array mutation methods: remove one item and insert another. `vm.list.splice(3,1,{id:'123345',text:'noone hello'})`
#
# - Second, arrays are reference types, so just change the reference,
#
# i.e. replace the array.
#
# For example, to change the second item of list, just change it directly:
#
# ```js
# vm.list = [
# {id:'131545345',text:'nihao'},
# {id:'131122145',text:'xxxxhao'},
# {id:'131123245',text:'dajiahao'}
# ]
# ```
#
# - Third, use the set method:
# ```js
# vm.$set(vm.list,2,{id:'121231',text:'xiaoasdsa'})
# ```
#
# This replaces the object at `index 2` with `{id:'121231',text:'xiaoasdsa'}`
# If you need several elements per loop iteration, you can write it as below; template can only be used once this way, and the template tag itself is not rendered to the page.
#
# ```html
# <template v-for="(item,index) of list" :key="item.id">
# <div>{{item}}</div>
# <span>{{item}}</span>
# </template>
# ```
# ### Looping over objects
# ```html
# <!DOCTYPE html>
# <html lang="en">
#
# <head>
# <meta charset="UTF-8">
# <meta name="viewport" content="width=device-width, initial-scale=1.0">
# <meta http-equiv="X-UA-Compatible" content="ie=edge">
# <script src="https://unpkg.com/vue"></script>
# <title>Vue的样式绑定</title>
#
# </head>
#
# <body>
# <div id="app">
# <ul v-for="(item,key,index) of userInfo">
# {{item}} ----- {{key}} ---- {{index}}
# </ul>
# </div>
# <script>
# // A lifecycle hook is a function the Vue instance runs automatically at a certain point in time
# var vm = new Vue({
# el: '#app',
# data() {
# return {
# userInfo: {
# name: 'Dilei',
# age: 27,
# gender: 'male',
# salary: 'secret'
# }
# }
# }
# })
# </script>
# </body>
# ```
# Add a property to it; the page does not react immediately:
# ```js
# vm.userInfo.address = 'beijing'
# ```
#
# To make it react immediately, recall that objects are reference types, so simply change the reference:
#
# ```
# vm.userInfo= {
# name: 'Dilei',
# age: 27,
# gender: 'male',
# salary: 'secret',
# address: 'beijing'
# }
# ```
#
# Or use the method below
# ## Modifying an object's values with the set method
#
# ```js
# Vue.set(vm.userInfo,'address','tianjin')
# ```
#
# which is equivalent to
#
# ```js
# vm.$set(vm.userInfo,'address','tianjin')
# ```
# +
from IPython.display import HTML
HTML('<iframe width="729" height="410" src="https://www.youtube.com/embed/j-3RwvWZoaU" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>')
# -
HTML('<iframe width="729" height="410" src="https://www.youtube.com/embed/nteDXuqBfn0" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>')
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="gtlAv2oq9yzd" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="fc8b2f7c-c030-4832-e407-7c4a8bd1a0c0"
#Write a Python program to find the sum of all elements in a list using loop.
#Input:- [10,20,30,40]
#Output:- 100
a=[10,20,30,40]
sum=0
for i in range(0,len(a)):
sum=sum+a[i]
print(sum)
# + id="gTbA8psa93Y_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="3c89ce20-afe6-4649-e964-f0861cf5a273"
#Write a Python program to find the multiplication of all elements in a list using loop.
#Input:- [10,20,30,40]
#Output:- 240000
a=[10,20,30,40]
mul=1
for i in range(0,len(a)):
mul=mul*a[i]
print(mul)
# + id="XBCrEtLN94Lj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="93e5ca31-2c76-4a1f-f39f-9c7ee8f3ca84"
#Write a Python program to find the largest number from a list using loop.
#Input:- [10,100,2321, 1,200,2]
#Output:- 2321
a=[10,100,2321,1,200,2]
largest=a[0]
for i in range(1,len(a)):
    if a[i]>largest:
        largest=a[i]
print("The largest number in the list:",largest)
# + id="l3paUtbH94Vw" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="455a04de-649a-4c8c-be03-009dfdafc0d3"
#Write a Python program to find the smallest number from a list using loop.
#Input:- [10,100,2321, 1,200,2]
#Output:- 1
a=[10,100,2321,1,200,2]
smallest=a[0]
for i in range(1,len(a)):
    if a[i]<smallest:
        smallest=a[i]
print("The smallest number in the list:",smallest)
# + id="JTVgy5VD94cE" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="73e2014b-3576-4d32-8e71-d93db1af98f9"
#Write a Python program to count the number of strings having length more than 2 and are palindrome in a list using loop.
#Input:- ['ab', 'abc', 'aba', 'xyz', '1991']
#Output:- 2
a=['ab', 'abc', 'aba', 'xyz', '1991']
count=0
for s in a:
    if len(s)>2 and s==s[::-1]:  # longer than 2 and a palindrome
        count=count+1
print(count)
# + id="xs-ZGJbJ94gr" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="56cab3d8-10df-48e1-caa2-b582095ef68c"
#Write a Python program to sort a list in ascending order using loop.
#Input:- [100,10,1,298,65,483,49876,2,80,9,9213]
#Output:- [1,2,9,10,65,80,100,298,483,9213,49876]
def bs(a):
b=len(a)-1
for x in range(b):
for y in range(b-x):
if a[y]>a[y+1]:
a[y],a[y+1]=a[y+1],a[y]
return a
a=[100,10,1,298,65,483,49876,2,80,9,9213]
bs(a)
# + id="VhKQJizZ94kg" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="a2100228-e8a6-45af-87a8-c17d61d605fd"
#Write a Python program to get a sorted list in increasing order of last element in each tuple in a given list using loop.
#Input:- [(5,4),(9,1),(2,3),(5,9),(7,6),(5,5)]
#output:- [(9,1),(2,3),(5,4),(5,5),(7,6),(5,9)]
def bs(a):
    b=len(a)-1
    for x in range(b):
        for y in range(b-x):
            if a[y][-1]>a[y+1][-1]:  # compare by the last element of each tuple
                a[y],a[y+1]=a[y+1],a[y]
    return a
a=[(5,4),(9,1),(2,3),(5,9),(7,6),(5,5)]
bs(a)
# + id="YJrCLYp694n_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="2161da0a-b1cf-4b7b-db6d-0f0efb5e0d8f"
#Write a Python program to remove duplicate element from a list using loop.
#Input:- [10,1,11,1,29,876,768,10,11,1,92,29,876]
#Output:- [10,1,11,29,876,768,92]
def dupli(duplicate):
final_list=[]
for i in duplicate:
if(i not in final_list):
final_list.append(i)
return final_list
duplicate= [10,1,11,1,29,876,768,10,11,1,92,29,876]
print(dupli(duplicate))
# + id="koYe4YCu94rh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="1767c90d-a78e-41fe-b041-4c0a61689c36"
#Write a Python program to check a list is empty or not?
#Input:- []
#Output:- List is empty
#Input:- [10,20,30]
#Output:- List is not empty
a=[10,20,30]
b=len(a)
if b==0:
print("List is empty")
else:
print("List is not empty")
# + id="mES1WANa94u5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="ff6c9fd4-1fa9-4cf9-8f7f-08fae7ba63de"
#Write a Python program to copy a list using loop.
#inp_lst = [10,10.20,10+20j, 'Python', [10,20], (10,20)]
#out_lst = [10,10.20,10+20j, 'Python', [10,20], (10,20)]
def copy(a):
    result=[]  # avoid shadowing the built-in list
    for i in a:
        result.append(i)
    return result
a=[10,10.20,10+20j, 'Python', [10,20], (10,20)]
print(copy(a))
# + id="CpBh31WO94yi" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="4e634152-bc05-4bec-f3eb-5804f46ff2dd"
#Write a Python program to find the list of words that are longer than or equal to 4 from a given string.
#Input:- 'How much wood would a woodchuck chuck if a woodchuck could chuck wood'
#Output:- ['much', 'wood', 'would', 'woodchuck', 'chuck', 'could', 'could']
#Note:- Duplicate should be avoided.
a='How much wood would a woodchuck chuck if a woodchuck could chuck wood'
b=a.split()
c=[]
for w in b:
    if len(w)>=4 and w not in c:  # length >= 4, skipping duplicates
        c.append(w)
print(c)
# + id="YVfj0fgZ9416" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="dfd09593-d064-4022-ac08-cc490a01c531"
#Write a Python program which takes two list as input and returns True if they have at least 3 common elements.
#inp_lst1 = [10,20,'Python', 10.20, 10+20j, [10,20,30], (10,20,30)]
#inp_lst2 = [(10,20,30),1,20+3j,100.2, 10+20j, [10,20,30],'Python']
#Output:- True
a=[10,20,'Python', 10.20, 10+20j, [10,20,30], (10,20,30)]
b=[(10,20,30),1,20+3j,100.2, 10+20j, [10,20,30],'Python']
k=0
for i in range(0,len(a)):
    for j in range(0,len(b)):
        if (a[i]==b[j]):
            k=k+1
print(k>=3)  # True if at least 3 common elements
# + id="7vwc3pjR945O" colab_type="code" colab={}
#Write a Python program to create a 4X4 2D matrix with below elements using loop and list comprehension both.
#Output:- [[0,0,0,0],[0,1,2,3],[0,2,4,6],[0,3,6,9]]
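A possible solution sketch (assuming the intended pattern is element `[i][j] = i*j`, which matches the expected output):

```python
# 4x4 matrix with element [i][j] = i*j, built with nested loops
matrix_loop = []
for i in range(4):
    row = []
    for j in range(4):
        row.append(i * j)
    matrix_loop.append(row)

# The same matrix as a list comprehension
matrix_comp = [[i * j for j in range(4)] for i in range(4)]

print(matrix_loop)  # [[0, 0, 0, 0], [0, 1, 2, 3], [0, 2, 4, 6], [0, 3, 6, 9]]
print(matrix_comp == matrix_loop)  # True
```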
# + id="Pxafor84948b" colab_type="code" colab={}
#Write a Python program to create a 3X4X6 3D matrix wiith below elements using loop
#Output:-
# [
# [[0,0,0,0,0,0],[0,0,0,0,0,0],[0,0,0,0,0,0],[0,0,0,0,0,0]],
# [[0,0,0,0,0,0],[1,1,1,1,1,1],[2,2,2,2,2,2],[3,3,3,3,3,3]],
# [[0,0,0,0,0,0],[2,2,2,2,2,2],[4,4,4,4,4,4],[6,6,6,6,6,6]]
# ]
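One way to sketch this (assuming each innermost row repeats the value `i*j` six times, matching the expected output):

```python
# 3x4x6 matrix where layer i, row j holds the value i*j repeated 6 times
matrix3d_loop = []
for i in range(3):
    layer = []
    for j in range(4):
        layer.append([i * j for _ in range(6)])
    matrix3d_loop.append(layer)

# Equivalent list comprehension
matrix3d_comp = [[[i * j] * 6 for j in range(4)] for i in range(3)]

print(matrix3d_loop == matrix3d_comp)  # True
print(matrix3d_loop[2][3])  # [6, 6, 6, 6, 6, 6]
```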
# + id="lYNv8gk794_p" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="89c3d50f-e2d4-4a69-b787-8db8958a1d99"
#Write a Python program which takes a list of numbers as input and prints a new list after removing even numbers from it.
#Input:- [10,21,22,98,87,45,33,1,2,100]
#Output:- [21,87,45,33,1]
my_str=[10,21,22,98,87,45,33,1,2,100]
my_str = [x for x in my_str if x%2!=0]
print(my_str)
# + id="3BaCb-rB95B7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="477fbe41-6959-46c1-e0f7-ca7b682bb521"
#Write a Python program which takes a list from the user and prints it after reshuffling the elements of the list.
#Input:- [10,21,22,98,87,45,33,1,2,100]
#Output:- [1,87,21,10,33,2,100,45,98,22] (It may be any randon list but with same elements)
import random
my_str=[10,21,22,98,87,45,33,1,2,100]
random.shuffle(my_str)
print("The shuffled list is:"+str(my_str))
| Harshitha/Copy_of_List_Assignment.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="LLUZb-SYR86C"
import copy  # needed below for copy.deepcopy
import numpy as np
import tensorflow as tf
def deepfool(image, model, num_classes=10, overshoot=0.02, max_iter=50, shape=(28, 28, 1)):
image_array = np.array(image)
image_norm = tf.cast(image_array / 255.0 - 0.5, tf.float32)
image_norm = np.reshape(image_norm, shape)
image_norm = image_norm[tf.newaxis, ...]
f_image = model(image_norm).numpy().flatten()
I = (np.array(f_image)).flatten().argsort()[::-1]
I = I[0:num_classes]
label = I[0]
input_shape = np.shape(image_norm)
pert_image = copy.deepcopy(image_norm)
w = np.zeros(input_shape)
r_tot = np.zeros(input_shape)
loop_i = 0
x = tf.Variable(pert_image)
fs = model(x)
k_i = label
print(fs)
def loss_func(logits, I, k):
return logits[0, I[k]]
while k_i == label and loop_i < max_iter:
pert = np.inf
one_hot_label_0 = tf.one_hot(label, num_classes)
with tf.GradientTape() as tape:
tape.watch(x)
fs = model(x)
loss_value = loss_func(fs, I, 0)
grad_orig = tape.gradient(loss_value, x)
for k in range(1, num_classes):
one_hot_label_k = tf.one_hot(I[k], num_classes)
with tf.GradientTape() as tape:
tape.watch(x)
fs = model(x)
loss_value = loss_func(fs, I, k)
cur_grad = tape.gradient(loss_value, x)
w_k = cur_grad - grad_orig
f_k = (fs[0, I[k]] - fs[0, I[0]]).numpy()
pert_k = abs(f_k) / np.linalg.norm(tf.reshape(w_k, [-1]))
if pert_k < pert:
pert = pert_k
w = w_k
r_i = (pert + 1e-4) * w / np.linalg.norm(w)
r_tot = np.float32(r_tot + r_i)
pert_image = image_norm + (1 + overshoot) * r_tot
x = tf.Variable(pert_image)
fs = model(x)
k_i = np.argmax(np.array(fs).flatten())
loop_i += 1
r_tot = (1 + overshoot) * r_tot
return r_tot, loop_i, label, k_i, pert_image
| DeepFool.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
country_data = pd.read_csv('products_countries.tsv', sep='\t')
country_data.head()
country_data['country'].value_counts()
products =pd.read_csv('products.tsv', sep='\t')
products.head()
product_countries = pd.merge(left=products, right=country_data, on="code", how="outer")
product_countries.columns
product_countries.head()
product_countries.columns
p_countries_alcohol = product_countries[['code','country','alcohol_100g']]
p_countries_alcohol=p_countries_alcohol.dropna()
p_countries_alcohol.head()
p_countries_alcohol.count()
p_countries_alcohol
| tsv/Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
from sklearn.metrics import log_loss
# -
train_targets_scored = pd.read_csv('data/raw/train_targets_scored.csv')
target_cols = train_targets_scored.drop('sig_id', axis=1).columns.values.tolist()
# +
base = pd.read_csv('oof/3lay_1024_20.0_quantile_250.csv')
base.head()
# +
best = pd.read_csv('oof/4lay_600_15.0_quantile_500.csv')
best.head()
# +
tot = pd.merge(base, best, on='sig_id')
assert tot.shape[0] == base.shape[0]
assert tot.shape[0] == best.shape[0]
tot.head()
# -
def make_blend(weight, tot):
res = tot[['sig_id']].copy()
for col in target_cols:
res[col] = weight * tot[f'{col}_x'] + (1 - weight) * tot[f'{col}_y']
valid_results = train_targets_scored.drop(columns=target_cols).merge(res[['sig_id']+target_cols], on='sig_id', how='left').fillna(0)
y_true = train_targets_scored[target_cols].values
y_pred = valid_results[target_cols].values
score = 0
for i in range(len(target_cols)):
score_ = log_loss(y_true[:, i], y_pred[:, i])
score += score_ / len(target_cols)
return score*10000000
# +
vals = np.arange(0.1, 1, 0.1)
res = []
for val in vals:
res.append(make_blend(val, tot))
summary = pd.DataFrame({'weight': vals, 'score': res})
summary['score'] = summary['score'].astype('float64')
summary
# -
summary.plot(x='weight')
| mechanism_of_action/moa_blends.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import json
import torch
from minicons import scorer
from torch.utils.data import DataLoader
from tqdm import tqdm
from sklearn.metrics import accuracy_score, f1_score
# -
def get_anli(split = "dev"):
labels = []
with open(f"../data/anli/{split}-labels.lst", "r") as f:
for line in f:
labels.append(int(line))
anli = []
with open(f"../data/anli/{split}.jsonl", "r") as f:
for line in f:
data = json.loads(line)
anli.append([data['obs2'], f"{data['obs1']} {data['hyp1']}", f"{data['obs1']} {data['hyp2']}"])
return anli, labels
anli, labels = get_anli("test")
gpt = scorer.IncrementalLMScorer("gpt2-medium")
labels[5:10]
anli[5:10]
anli_dl = DataLoader(anli[5:10], batch_size = 5)
for batch in anli_dl:
pass
obs2, hyp1, hyp2 = batch
hyp1_scores = []
hyp2_scores = []
hyp1_scores.extend(gpt.partial_score(list(hyp1), list(obs2), reduction=lambda x: x.sum(0).item()))
hyp2_scores.extend(gpt.partial_score(list(hyp2), list(obs2), reduction=lambda x: x.sum(0).item()))
predicted = (torch.stack((torch.tensor(hyp1_scores), torch.tensor(hyp2_scores))).argmax(0)+1)
labels[:10][:5], predicted
list(zip(hyp1_scores, hyp2_scores))
gpt.token_score("Tom applied for a job at a call center. His buddy Charles was already there. Tom and his friend had a lot of fun working together.")
# +
x = [('Tom', -2.4024810791015625),
('and', -3.6292877197265625),
('his', -0.414031982421875),
('friend', -5.403038024902344),
('had', -3.56390380859375),
('a', -1.6639251708984375),
('lot', -3.6679534912109375),
('of', -0.35161805152893066),
('fun', -1.5521011352539062),
('working', -2.3483734130859375),
('together', -1.6277923583984375),
('.', -0.5834808349609375)]
words, lps = list(zip(*x))
# -
torch.tensor(lps).sum()
list(zip(hyp2, obs2, hyp2_scores))
hyp1_scores = []
hyp2_scores = []
for batch in tqdm(anli_dl):
obs2, hyp1, hyp2 = batch
hyp1_scores.extend(gpt.partial_score(list(hyp1), list(obs2)))
hyp2_scores.extend(gpt.partial_score(list(hyp2), list(obs2)))
predicted = (torch.stack((torch.tensor(hyp1_scores), torch.tensor(hyp2_scores))).argmax(0)+1)
torch.stack((torch.tensor(hyp1_scores), torch.tensor(hyp2_scores))).argmax(0)+1
(torch.tensor(labels) == predicted).float().mean()
len(labels) == len(predicted)
accuracy_score(labels, predicted)
torch.tensor(hyp1_scores)
gpt.partial_score(list(hyp1), list(obs2)), gpt.partial_score(list(hyp2), list(obs2))
| src/anli.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Testing if a Distribution is Normal
# ## Imports
# +
import numpy as np
import pandas as pd
import scipy.stats as stats
import matplotlib.pyplot as plt
import quiz_tests
# Set plotting options
# %matplotlib inline
plt.rc('figure', figsize=(16, 9))
# -
# ## Create normal and non-normal distributions
# +
# Sample A: Normal distribution
sample_a = stats.norm.rvs(loc=0.0, scale=1.0, size=(1000,))
# Sample B: Non-normal distribution
sample_b = stats.lognorm.rvs(s=0.5, loc=0.0, scale=1.0, size=(1000,))
# -
# ## Box-and-Whisker Plot and Histogram
#
# We can visually check if a distribution looks normally distributed. Recall that a box whisker plot lets us check for symmetry around the mean. A histogram lets us see the overall shape. A QQ-plot lets us compare our data distribution with a normal distribution (or any other theoretical "ideal" distribution).
# Sample A: Normal distribution
sample_a = stats.norm.rvs(loc=0.0, scale=1.0, size=(1000,))
fig, axes = plt.subplots(2, 1, figsize=(16, 9), sharex=True)
axes[0].boxplot(sample_a, vert=False)
axes[1].hist(sample_a, bins=50)
axes[0].set_title("Boxplot of a Normal Distribution");
# Sample B: Non-normal distribution
sample_b = stats.lognorm.rvs(s=0.5, loc=0.0, scale=1.0, size=(1000,))
fig, axes = plt.subplots(2, 1, figsize=(16, 9), sharex=True)
axes[0].boxplot(sample_b, vert=False)
axes[1].hist(sample_b, bins=50)
axes[0].set_title("Boxplot of a Lognormal Distribution");
# Q-Q plot of normally-distributed sample
plt.figure(figsize=(10, 10)); plt.axis('equal')
stats.probplot(sample_a, dist='norm', plot=plt);
# Q-Q plot of non-normally-distributed sample
plt.figure(figsize=(10, 10)); plt.axis('equal')
stats.probplot(sample_b, dist='norm', plot=plt);
# ## Testing for Normality
# ### Shapiro-Wilk
#
# The Shapiro-Wilk test is available in the scipy library. The null hypothesis assumes that the data distribution is normal. If the p-value is greater than the chosen significance level, we'll assume that the distribution is normal. Otherwise we assume that it's not normal.
# https://docs.scipy.org/doc/scipy-0.19.0/reference/generated/scipy.stats.shapiro.html
# +
def is_normal(sample, test=stats.shapiro, p_level=0.05, **kwargs):
"""Apply a normality test to check if sample is normally distributed."""
t_stat, p_value = test(sample, **kwargs)
print("Test statistic: {}, p-value: {}".format(t_stat, p_value))
print("Is the distribution Likely Normal? {}".format(p_value > p_level))
return p_value > p_level
# Using Shapiro-Wilk test (default)
print("Sample A:-"); is_normal(sample_a);
print("Sample B:-"); is_normal(sample_b);
# -
# ## Kolmogorov-Smirnov
#
# The Kolmogorov-Smirnov test is available in the scipy.stats library. The K-S test compares the data distribution with a theoretical distribution. We'll choose the 'norm' (normal) distribution as the theoretical distribution, and we also need to specify the mean and standard deviation of this theoretical distribution. We'll set the mean and standard deviation of the theoretical norm to the mean and standard deviation of the data distribution.
#
# https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.stats.kstest.html
# # Quiz
#
# To use the Kolmogorov-Smirnov test, complete the function `is_normal_ks`.
#
# To set the variable normal_args, create a tuple with two values. An example of a tuple is `("apple","banana")`
# The first is the mean of the sample. The second is the standard deviation of the sample.
#
# **Hint:** Numpy has functions np.mean() and np.std()
# +
def is_normal_ks(sample, test=stats.kstest, p_level=0.05, **kwargs):
"""
sample: a sample distribution
test: a function that tests for normality
p_level: if the test returns a p-value > than p_level, assume normality
return: True if distribution is normal, False otherwise
"""
normal_args = (sample.mean(), sample.std())
t_stat, p_value = test(sample, 'norm', normal_args, **kwargs)
print("Test statistic: {}, p-value: {}".format(t_stat, p_value))
print("Is the distribution Likely Normal? {}".format(p_value > p_level))
return p_value > p_level
quiz_tests.test_is_normal_ks(is_normal_ks)
# -
# Using Kolmogorov-Smirnov test
print("Sample A:-"); is_normal_ks(sample_a);
print("Sample B:-"); is_normal_ks(sample_b);
# If you're stuck, you can also check out the solution [here](test_normality_solution.ipynb)
| module02/lesson12/08_normality/test_normality.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="ed5bpKaqUHMU"
# # [Agricultural Image AI Data Utilization Idea Hackathon](http://k-farmdata.com/hackathon/)
# An AI hackathon hosted by NIA and Gyeongsang National University.
# We won the grand prize, and the competition made the differences between machine learning and deep learning clear to us.
#
# + [markdown] id="pbelYileUb0n"
# [](https://colab.research.google.com/github/DongChanKIM2/AI-Data-Hackathon-Competiton/blob/main/Fruit_CNN_pre.ipynb)
# + [markdown] id="AgViz8KsUN0S"
# # Data preprocessing
# Converts the provided image data into pickled (X, y) pairs for use in Colab.<br>
# Working-folder structure:<br>
# data/[apple_img, apple_label,..., radish_img, radish_label]<br>
# apple_img/[apple_fuji_L_1-1.png, ...]<br>
# apple_label/[apple_fuji_L_1-1.json, ...]
# + id="_JX-LFyEUHMo"
import json
import glob # 경로명을 이용해 파일 리스트 추출
import os # 현재 위치 및 경로 병합
import pandas as pd # 데이터
from tqdm import tqdm # 실행시간 보여주기
import numpy as np # 수학연산
import pickle # pickle
# + id="tpU9U-O8UHMu"
# image I/O and plotting
import matplotlib.pyplot as plt
from skimage.io import imread
from skimage.transform import resize
# + id="xJoq15Z6UHMx"
def get_label(filenames):
"""
각각의 파일명(key)에 대응하는 상품 등급(value)
담은 dictionary 반환
"""
labels = {}
for f in tqdm(filenames):
with open(f, "r", encoding='utf-8') as json_file:
try:
d = json.load(json_file)
except UnicodeDecodeError:
print('Unicode_error')
continue
file_name = f.split('\\')[-1]
file_name = file_name.split('.')[0]
labels[file_name] = d['cate3']
return labels
# + id="HeSS8vFYUHMy"
def get_raw_imgs(img_names):
"""
파일명을 input, 각각의 이미지를 100x100x3 list로 반환
"""
imgs = []
for i in img_names:
imgs.append(imread(i))
return imgs
# + id="6yMZOvngUHMz"
def make_dataset(img_names, labels):
X = []
y = []
fn = []
for img_nm in tqdm(img_names):
try:
            img_raw = imread(img_nm)  # 0-255 image data
if img_raw.shape[-1] == 4:
img_raw = img_raw[:,:,:-1]
except ValueError:
print('Error about image')
continue
        pure_nm = img_nm.split('\\')[-1].split('.')[0]  # bare file name (extension removed)
        # img_raw = resize(img_raw, (100, 100))  # min_max scaling
label = labels.get(pure_nm, 0)
        if label:  # if the label is not 0
X.append(img_raw)
y.append(label)
fn.append(pure_nm)
X = np.array(X)
y = np.array(y)
fn = np.array(fn)
return X, y, fn
# + id="opgbpc2uUHM1"
# names of all 10 crops
all_file_names = glob.glob('data/*')
fruit_names = list(set(x.split('\\')[-1].split('_')[0] for x in all_file_names))
fruit_names.sort()
# + id="jlIrBBy_UHM2"
subclass_dict = {}
# + id="KUEmAN0pUHM3"
for fruit_name in tqdm(fruit_names):
img_names = glob.glob(f'data/{fruit_name}_img/*')
subclass = set([x.split('\\')[-1].split('_')[1] for x in img_names])
subclass_dict[fruit_name] = subclass
# + id="egmrXdscUHM4"
subclass_dict['chinese'] = set(['cabbage'])
# + id="RAqxB2HeUHM5" outputId="4cfaa48f-8135-492f-844c-37c6e4ccdd97"
subclass_dict # inspect the subclasses
# + [markdown] id="J3QPWyYuUHM-"
# ## Plotting
# + id="TNYbk42cUHM_"
def get_raw_imgs(img_names):
"""
파일명을 input, 각각의 이미지를 100x100x3 list로 반환
"""
imgs = []
for i in img_names:
imgs.append(imread(i))
return imgs
# + id="EtENeoYpUHNA"
def img_plotting(imgs, img_locs, *args):
    """
    Plotting helper: show the images in an args[0] x args[1] grid,
    with titles derived from the file names.
    """
    plt.figure(figsize=(12, 6))
    titles = ['_'.join(x.split('\\')[-1].split('_')[:3]) for x in img_locs]
    for i in range(args[0]*args[1]):
        ax = plt.subplot(args[0], args[1], i + 1)
        plt.imshow(imgs[i])
        plt.title(titles[i])
        plt.axis("off")
# + id="wbtZJULMUHNB"
n = 20
np.random.seed(2021)
for fruit_name in fruit_names:
for subclass in subclass_dict[fruit_name]:
img_names = glob.glob(f'data/{fruit_name}_img/*')
if subclass == 'cabbage':
img_locs = np.random.choice(img_names, n, replace=False)
else:
img_locs = np.random.choice([x for x in img_names if x.split('\\')[-1].split('_')[1] == subclass],
n, replace=False)
imgs_info = get_raw_imgs(img_locs)
img_plotting(imgs_info, img_locs, 2, 5)
plt.savefig(f'./{fruit_name}_{subclass}.png')
# + id="zmYRfh_nUHNC"
# save pickle files containing X, y, and file names
for fruit_name in fruit_names:
print(f'process {fruit_name}')
filenames = glob.glob(f'data/{fruit_name}_label/*')
labels = get_label(filenames)
img_names = glob.glob(f'data/{fruit_name}_img/*')
X, y, fn = make_dataset(img_names, labels)
dataset = {'X': X, 'y': y, 'file_name': fn}
with open(f'data_colab/{fruit_name}.pkl', 'wb') as f:
pickle.dump(dataset, f)
| Fruit_CNN_pre.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
DISPLAY_ALL_TEXT = False
pd.set_option("display.max_colwidth", 0 if DISPLAY_ALL_TEXT else 50)
# +
### Dataset contains text with disease and drug mentions that were extracted from the DailyMed drug labels
# -
# ### Dataset labelled by experts to be used as test data
experts = pd.read_csv('../data/input/expert_resolved_all.csv')
experts.sample()
experts = experts.rename(columns={'drug_name':'Drug_name',
'disease_name':'disease', 'context':'Context',
'do_id':'DO_ID','drug_id':'DB_ID',
'label_id':'Label_ID', 'expert_consensus':'relation'})
#experts[['DB_ID', 'DO_ID','disease', 'Label_ID', 'Set_ID','relation', 'Drug_name', 'Context','Section']]
experts.DB_ID='Compound::'+experts.DB_ID
experts.DO_ID= 'Disease::DOID:'+experts.DO_ID.str[5:]
experts.head()
experts.relation = experts.relation =='Indication: Treatment'
experts.relation.sum()
df_het = pd.read_csv('../data/input/hetionet_ground_truth.csv')
#df_het= df_het.append(experts[['DB_ID','DO_ID']])
df_het.DB_ID='Compound::'+df_het.DB_ID
df_het.DO_ID= 'Disease::'+df_het.DO_ID
df_het.head()
from sklearn.model_selection import train_test_split
df_het_train, df_het_test =train_test_split(df_het, shuffle=False, test_size=0.2)
df_het_train.shape
indications_het =set()
experts_dict = {}
for i,row in df_het_train.iterrows():
#print (row['a'], row['b'])
indications_het.add((str(row['DB_ID']),str(row['DO_ID'])))
#break
len(indications_het)
('Compound::DB08906', 'Disease::DOID:2841') in indications_het
whole_dr_di = []
for dr in df_het.DB_ID.unique():
for di in df_het.DO_ID.unique():
if (dr, di) in indications_het:
#print ((dr, di))
whole_dr_di.append([dr, di, 1])
else:
whole_dr_di.append([dr, di, 0])
df_train = pd.DataFrame(whole_dr_di, columns=['DB_ID', 'DO_ID','label'])
# ### Define KG Rules as Labeling Functions in Snorkel
# Snorkel is a data labeling framework in which multiple heuristic and programmatic rules can be combined to assign labels to training data.
# Labels to be assigned
POSITIVE = 1 # positive that the drug indication is recommendable
NEGATIVE = 0 # negative that the drug indication is recommendable
ABSTAIN = -1 # not sure whether the drug indication is recommendable or not
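To illustrate how per-rule votes over these three labels can be combined, here is a minimal pure-Python majority-vote sketch — this is not the Snorkel `LabelModel` API, and `majority_vote` is a hypothetical helper for illustration only:

```python
POSITIVE, NEGATIVE, ABSTAIN = 1, 0, -1

def majority_vote(votes):
    """Combine labeling-function votes: ignore abstains, take the majority."""
    active = [v for v in votes if v != ABSTAIN]
    if not active:
        return ABSTAIN  # every rule abstained
    positives = sum(1 for v in active if v == POSITIVE)
    return POSITIVE if positives * 2 > len(active) else NEGATIVE

print(majority_vote([POSITIVE, ABSTAIN, POSITIVE, NEGATIVE]))  # 1
print(majority_vote([ABSTAIN, ABSTAIN]))  # -1
```

Snorkel's label model generalizes this idea by learning per-rule accuracies instead of weighting every rule equally.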
# ### Embedding
# ### TransE Embeddings (Pre-Trained)
# +
import numpy as np
import csv
from sklearn.metrics.pairwise import cosine_similarity
import torch.nn.functional as fn
import torch as th
node_emb = np.load('../embed/DRKG_TransE_l2_entity.npy')
relation_emb = np.load('../embed/DRKG_TransE_l2_relation.npy')
print(node_emb.shape)
print(relation_emb.shape)
# +
entity2id = {}
id2entity = {}
with open("../embed/entities.tsv", newline='', encoding='utf-8') as csvfile:
reader = csv.DictReader(csvfile, delimiter='\t', fieldnames=['entity','id'])
for row_val in reader:
id = row_val['id']
entity = row_val['entity']
entity2id[entity] = int(id)
id2entity[int(id)] = entity
print("Number of entities: {}".format(len(entity2id)))
# +
relation2id = {}
id2relation = {}
with open("../embed/relations.tsv", newline='', encoding='utf-8') as csvfile:
reader = csv.DictReader(csvfile, delimiter='\t', fieldnames=['relation','id'])
for row_val in reader:
id = row_val['id']
relation = row_val['relation']
#print (relation, id)
relation2id[relation] = int(id)
id2relation[int(id)] = relation
print("Number of relations: {}".format(len(relation2id)))
# -
node_emb[entity2id['Compound::DB00970']][:10]
# +
i =0
for e in entity2id:
if 'Compound::DB' in e or 'Disease::DOID:' in e:
i+=1
print (i)
with open('../embed/DRGK_entity_embedding.txt','w') as fw:
fw.write(str(i)+' '+str(node_emb.shape[1])+'\n')
for e in entity2id:
if 'Compound::DB' in e or 'Disease::DOID:' in e:
fw.write(e+' '+" ".join( [str(v) for v in node_emb[entity2id[e]]] )+'\n')
# -
# ### RDF2VEC Embedding
from gensim.models import KeyedVectors
word_vectors = KeyedVectors.load_word2vec_format('../embed/DRGK_entity_embedding.txt', binary=False)
word_vectors.most_similar('Disease::DOID:2841')
word_vectors.similar_by_vector(word_vectors['Compound::DB00970'])
# +
# Use Embedding model in the labeling function
# -
# ### A probabilistic rule is a rule that contains uncertainty about the reasoning
# Based on the guilt-by-association principle (e.g. Barabasi et al., 2011;
# Chiang and Butte, 2009), new disease–drug relationships can be inferred
# through existing relationships between similar diseases and similar drugs.
# #### Example: If drug ${dr}$ treats disease ${ds}$ and disease ${ds}$ is similar to disease ${ds'}$ then drug ${dr}$ treats disease ${ds'}$
# * ${?dr}$ CtD ${?ds}$ ^ ${?ds}$ DrD ${?ds'}$ => ${?dr}$ CtD ${?ds'}$
# +
from snorkel.labeling import labeling_function
def rule1_transe_sim(x):
drug= x.DB_ID
disease = x.DO_ID
#print (drug, disease)
# search all indications treating the same disease
# if the drug that is similar to the one in the known indication,
# return 'INDICATION'
score = 0.0
if drug in entity2id and disease in entity2id:
dr_idx = entity2id[drug]
ds_idx = entity2id[disease]
r1 = relation_emb[relation2id['Hetionet::CtD::Compound:Disease']]
r2 = relation_emb[relation2id['Hetionet::DrD::Disease:Disease']]
a = node_emb[dr_idx]
b = node_emb[ds_idx]
similarEntity = word_vectors.similar_by_vector(a + r1 + r2)
for en,sim in similarEntity:
if en == disease:
return POSITIVE
return ABSTAIN
# -
# ### Translation-based probabilistic rule as labeling function
# $if (h, r, t) => h + r = t $, vector norm of difference between head entity plus relation and tail entity in embedding space should be zero
# * we can simply assess the correctness of a triple (h, r, t) by
# checking the plausibility score of the embedding vectors of h, r and t.
@labeling_function()
def transe_sim(x):
drug= x.DB_ID
disease = x.DO_ID
#print (drug, disease)
# search all indications treating the same disease
# if the drug that is similar to the one in the known indication,
# return 'INDICATION'
score = 0.0
if drug in entity2id and disease in entity2id:
dr_idx = entity2id[drug]
ds_idx = entity2id[disease]
r = relation_emb[relation2id['Hetionet::CtD::Compound:Disease']]
similarEntity = word_vectors.similar_by_vector((node_emb[dr_idx] + r), topn=10)
for en,sim in similarEntity:
if en == disease:
return POSITIVE
return ABSTAIN
relation2id
# ### Probabilistic Rule using TransE Embedding:
# #### Rule1:
# If drug $?dr$ treats disease $?f$ and disease $?f$ resembles disease $?ds$ then drug $?dr$ treats $?ds$
#
# $$dr + r1 = f, f + r2 = ds => dr + r1 + r2 = ds, $$
# $$r1= treats , r2= resembles$$
# $$ sim = cosine (dr, ds) $$
#
#
@labeling_function()
def rule1_transe_sim(x):
drug= x.DB_ID
disease = x.DO_ID
#print (drug, disease)
# search all indications treating the same disease
# if the drug that is similar to the one in the known indication,
# return 'INDICATION'
score = 0.0
if drug in entity2id and disease in entity2id:
dr_idx = entity2id[drug]
ds_idx = entity2id[disease]
r1 = relation_emb[relation2id['Hetionet::CtD::Compound:Disease']]
r2 = relation_emb[relation2id['Hetionet::DrD::Disease:Disease']]
a = node_emb[dr_idx]
b = node_emb[ds_idx]
similarEntity = word_vectors.similar_by_vector(a + r1 + r2)
for en,sim in similarEntity:
if en == disease:
return POSITIVE
return ABSTAIN
# #### Rule 2:
# If drug $?dr$ binds gene $?g$ and disease $?ds$ associates gene $?g$ then drug $?dr$ treats $?ds$
# Compound–binds–Gene–associates–Disease => Compound–treats–Disease
@labeling_function()
def rule2_transe_sim(x):
drug= x.DB_ID
disease = x.DO_ID
#print (drug, disease)
# search all indications treating the same disease
# if the drug that is similar to the one in the known indication,
# return 'INDICATION'
score = 0.0
if drug in entity2id and disease in entity2id:
dr_idx = entity2id[drug]
ds_idx = entity2id[disease]
r1 = relation_emb[relation2id['Hetionet::CbG::Compound:Gene']]
r2 = relation_emb[relation2id['Hetionet::DaG::Disease:Gene']]
a = node_emb[dr_idx]
similarEntity = word_vectors.similar_by_vector(a + r1 -r2)
for en,sim in similarEntity:
if en == disease:
return POSITIVE
return ABSTAIN
# #### Rule 3:
# Compound–resemble–Compound–Treat–Disease => Compound–treats–Disease
@labeling_function()
def rule3_transe_sim(x):
drug= x.DB_ID
disease = x.DO_ID
#print (drug, disease)
# search all indications treating the same disease
# if the drug that is similar to the one in the known indication,
# return 'INDICATION'
score = 0.0
if drug in entity2id and disease in entity2id:
dr_idx = entity2id[drug]
ds_idx = entity2id[disease]
r1 = relation_emb[relation2id['Hetionet::CtD::Compound:Disease']]
r2 = relation_emb[relation2id['Hetionet::CrC::Compound:Compound']]
a = node_emb[dr_idx]
b = node_emb[ds_idx]
similarEntity = word_vectors.similar_by_vector(a + r2 + r1)
for en,sim in similarEntity:
if en == disease:
return POSITIVE
return ABSTAIN
# #### Rule 4
# Compound-include(-1)-Pharmacologic Class-include-Compound-treats-Disease
# +
@labeling_function()
def rule4_transe_sim(x):
drug= x.DB_ID
disease = x.DO_ID
score = 0.0
if drug in entity2id and disease in entity2id:
dr_idx = entity2id[drug]
ds_idx = entity2id[disease]
r1 = relation_emb[relation2id['Hetionet::PCiC::Pharmacologic Class:Compound']]
r2 = relation_emb[relation2id['Hetionet::PCiC::Pharmacologic Class:Compound']]
r3 = relation_emb[relation2id['Hetionet::CtD::Compound:Disease']]
similarEntity = word_vectors.similar_by_vector((node_emb[dr_idx] -r1 + r2 +r3))
for en,sim in similarEntity:
if en == disease:
return POSITIVE
return ABSTAIN
# -
# #### Rule 5
# Compound-resembles-Compound-resembles-Compound-treats-Disease
@labeling_function()
def rule5_transe_sim(x):
drug= x.DB_ID
disease = x.DO_ID
score = 0.0
if drug in entity2id and disease in entity2id:
dr_idx = entity2id[drug]
ds_idx = entity2id[disease]
r1 = relation_emb[relation2id['Hetionet::CrC::Compound:Compound']]
r2 = relation_emb[relation2id['Hetionet::CrC::Compound:Compound']]
r3 = relation_emb[relation2id['Hetionet::CtD::Compound:Disease']]
similarEntity = word_vectors.similar_by_vector((node_emb[dr_idx] + r1 + r2 +r3))
for en,sim in similarEntity:
if en == disease:
return POSITIVE
return ABSTAIN
# #### Rule 6
# Compound-palliates-Disease-palliates(-1)-Compound-treats-Disease
#
@labeling_function()
def rule6_transe_sim(x):
drug= x.DB_ID
disease = x.DO_ID
score = 0.0
if drug in entity2id and disease in entity2id:
dr_idx = entity2id[drug]
ds_idx = entity2id[disease]
r1 = relation_emb[relation2id['Hetionet::CpD::Compound:Disease']]
r2 = relation_emb[relation2id['Hetionet::CpD::Compound:Disease']]
r3 = relation_emb[relation2id['Hetionet::CtD::Compound:Disease']]
similarEntity = word_vectors.similar_by_vector((node_emb[dr_idx] + r1 - r2 +r3))
for en,sim in similarEntity:
if en == disease:
return POSITIVE
return ABSTAIN
# #### Rule 7
# Compound-binds-Gene-binds(-1)-Compound-treats-Disease
@labeling_function()
def rule7_transe_sim(x):
drug= x.DB_ID
disease = x.DO_ID
score = 0.0
if drug in entity2id and disease in entity2id:
dr_idx = entity2id[drug]
ds_idx = entity2id[disease]
r1 = relation_emb[relation2id['Hetionet::CbG::Compound:Gene']]
r2 = relation_emb[relation2id['Hetionet::CbG::Compound:Gene']]
r3 = relation_emb[relation2id['Hetionet::CtD::Compound:Disease']]
similarEntity = word_vectors.similar_by_vector((node_emb[dr_idx] + r1 - r2 +r3))
for en,sim in similarEntity:
if en == disease:
return POSITIVE
return ABSTAIN
# #### Rule 8
# Compound-causes-Side Effect-causes(-1)-Compound-treats-Disease
#
@labeling_function()
def rule8_transe_sim(x):
drug= x.DB_ID
disease = x.DO_ID
score = 0.0
if drug in entity2id and disease in entity2id:
dr_idx = entity2id[drug]
ds_idx = entity2id[disease]
r1 = relation_emb[relation2id['Hetionet::CcSE::Compound:Side Effect']]
r2 = relation_emb[relation2id['Hetionet::CcSE::Compound:Side Effect']]
r3 = relation_emb[relation2id['Hetionet::CtD::Compound:Disease']]
similarEntity = word_vectors.similar_by_vector((node_emb[dr_idx] + r1 - r2 +r3))
for en,sim in similarEntity:
if en == disease:
return POSITIVE
return ABSTAIN
# #### Rule 9
# Compound-resembles-Compound-binds-Gene-associates(-1)-Disease
#
#
# +
@labeling_function()
def rule9_transe_sim(x):
drug= x.DB_ID
disease = x.DO_ID
score = 0.0
if drug in entity2id and disease in entity2id:
dr_idx = entity2id[drug]
ds_idx = entity2id[disease]
r1 = relation_emb[relation2id['Hetionet::CrC::Compound:Compound']]
r2 = relation_emb[relation2id['Hetionet::CbG::Compound:Gene']]
r3 = relation_emb[relation2id['Hetionet::DaG::Disease:Gene']]
similarEntity = word_vectors.similar_by_vector((node_emb[dr_idx] + r1 + r2 - r3))
for en,sim in similarEntity:
if en == disease:
return POSITIVE
return ABSTAIN
# -
# #### Rule 10
# Compound-binds-Gene-expresses(-1)-Anatomy-localizes(-1)-Disease
#
# cosine_similarity(dr + r1 - r2 - r3, ds)
@labeling_function()
def rule10_transe_sim(x):
drug= x.DB_ID
disease = x.DO_ID
score = 0.0
if drug in entity2id and disease in entity2id:
dr_idx = entity2id[drug]
ds_idx = entity2id[disease]
r1 = relation_emb[relation2id['Hetionet::CbG::Compound:Gene']]
r2 = relation_emb[relation2id['Hetionet::AeG::Anatomy:Gene']]
r3 = relation_emb[relation2id['Hetionet::DlA::Disease:Anatomy']]
similarEntity = word_vectors.similar_by_vector((node_emb[dr_idx] + r1 - r2 - r3))
for en,sim in similarEntity:
if en == disease:
return POSITIVE
return ABSTAIN
@labeling_function()
def transe_sim(x):
drug= x.DB_ID
disease = x.DO_ID
    #print(drug, disease)
    # direct TransE check: if drug + treats lands near the disease
    # embedding among the top-10 nearest entities, return POSITIVE
score = 0.0
if drug in entity2id and disease in entity2id:
dr_idx = entity2id[drug]
ds_idx = entity2id[disease]
r = relation_emb[relation2id['Hetionet::CtD::Compound:Disease']]
similarEntity = word_vectors.similar_by_vector((node_emb[dr_idx] + r), topn=10)
for en,sim in similarEntity:
if en == disease:
return POSITIVE
return ABSTAIN
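# All of the labeling functions above rely on the TransE assumption that a true triple (h, r, t) satisfies h + r ≈ t, so composing relation vectors along a metapath and then searching for the nearest entity approximates reasoning over that metapath. A self-contained toy sketch of this scoring idea (the vectors and entity names below are made up for illustration and are not the Hetionet embeddings):

```python
import math

def nearest_entity(query, entity_vecs):
    """Return the entity whose vector has the highest cosine similarity to `query`."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)
    return max(entity_vecs, key=lambda e: cos(query, entity_vecs[e]))

# Toy embeddings: drug + treats should land near disease_a.
entity_vecs = {
    "disease_a": [1.0, 1.0],
    "disease_b": [-1.0, 0.5],
}
drug = [0.2, 0.9]
treats = [0.7, 0.2]  # relation vector
query = [d + r for d, r in zip(drug, treats)]  # h + r
print(nearest_entity(query, entity_vecs))  # disease_a
```

# A labeling function would return POSITIVE when the target disease appears among the nearest neighbours of the composed vector, and ABSTAIN otherwise.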
# +
from snorkel.labeling import PandasLFApplier, LFAnalysis
lfs = [transe_sim, rule1_transe_sim, rule2_transe_sim, rule3_transe_sim,
       rule4_transe_sim, rule5_transe_sim, rule6_transe_sim, rule7_transe_sim,
       rule8_transe_sim, rule9_transe_sim, rule10_transe_sim]
applier = PandasLFApplier(lfs=lfs)
L_train = applier.apply(df=df_train)
# +
from snorkel.labeling.model import LabelModel
label_model = LabelModel(cardinality=2, verbose=True)
label_model.fit(L_train=L_train, n_epochs=500, log_freq=100, seed=123)
# -
preds_train = label_model.predict(L=L_train)
probs_train =label_model.predict_proba(L=L_train)
probs_train[:, 1]
sum(probs_train[:, 1] > 0.5)
df_train['prob'] = probs_train[:, 1]
df_het_test['label'] =1
from snorkel.labeling import PandasLFApplier, LFAnalysis
applier = PandasLFApplier(lfs)
L_dev = applier.apply(df_het_test)
LFAnalysis(L_dev, lfs).lf_summary(df_het_test.label.values)
df_rank = {}
df_train.index = df_train.DB_ID
for i,drug in enumerate(df_train.DB_ID.unique()):
    x = df_train[df_train.DB_ID == drug].copy()
    x['Rank'] = x['prob'].rank(method='first', ascending=False)
x.index = x.DO_ID
df_rank[drug]= x['Rank'].to_dict()
pos_pair_dict = {}
for i,row in df_het_test.iterrows():
if row['DB_ID'] not in pos_pair_dict:
pos_pair_dict[row['DB_ID']] =[]
pos_pair_dict[row['DB_ID']].append( row['DO_ID'])
list(pos_pair_dict.items())[:10]
def mrr_map_new(pos_pair_dict, df_rank):
rank_scores = []
ave_p = []
in_top_5 = 0
in_top_10 = 0
    print('number of drugs with positive pairs: ', len(pos_pair_dict.keys()))
for fromt in pos_pair_dict.keys():
relevant_ranks = []
        min_rank = float('inf')
        for doid in pos_pair_dict[fromt]:
            rank = df_rank[fromt][doid]
            relevant_ranks.append(rank)
if rank < min_rank:
min_rank = rank
rank = min_rank
rank_scores.append(1.0/rank)
if min_rank <= 5:
in_top_5 += 1
if min_rank <= 10:
in_top_10 += 1
precisions = []
for rank in relevant_ranks:
good_docs = len([r for r in relevant_ranks if r <= rank])
precisions.append(good_docs/rank)
if len(precisions) == 0:
precisions = [0]
ave_p.append(np.mean(precisions))
print('mean average precision: ', np.mean(ave_p))
print('mean reciprocal rank: ', np.mean(rank_scores))
    print('recall rate at 5: ', in_top_5/len(pos_pair_dict.keys()))
    print('recall rate at 10: ', in_top_10/len(pos_pair_dict.keys()))
print('formatted')
print(round(np.mean(ave_p), 3),' & ',round(np.mean(rank_scores), 3),' & ',round(in_top_5/len(pos_pair_dict.keys()), 3),' & ',round(in_top_10/len(pos_pair_dict.keys()), 3))
return round(np.mean(ave_p), 3)
mrr_map_new(pos_pair_dict, df_rank)
| notebook/Probablistic Rules-Snorkel-DrugRepurposing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# # Analysis of an oscillator with *N* degrees of freedom subjected to a dynamic excitation applied at some degrees of freedom
#
# ## Problem formulation
#
# Dynamic equilibrium equation, in physical coordinates, of an oscillator with *N* degrees of freedom subjected to a dynamic excitation:
#
# $$[M] \times \{\ddot u\} + [C] \times \{\dot u\} + [K] \times \{u\} = \{s\} \cdot p(t)$$
#
# The vector $\{s\}$ contains the spatial description of the excitation, whose variation over time is defined by the term $p(t)$. In the present case it is assumed, without loss of generality, that the excitation is a harmonic function ($p(t) = \sin (\omega \cdot t)$).
#
# Adopting the following transformation from physical coordinates to modal coordinates:
#
# $$\{u\} = [\Phi] \times \{q\} = \sum_{n=1}^{N} \{\phi_n\} \cdot q_n$$
#
# the *N* dynamic equilibrium equations can be converted to modal coordinates. For mode *n* one obtains:
#
# $$M_n \cdot \{\ddot q_n\} + C_n \cdot \{\dot q_n\} + K_n \cdot \{q_n\} = \{\phi_n\}^T \times \{s\} \cdot p(t)$$
#
# where:
#
# $M_n = \{\phi_n\}^T \times [M] \times \{\phi_n\}$ - Modal mass
#
# $C_n = \{\phi_n\}^T \times [C] \times \{\phi_n\} = 2 \cdot \zeta_n \cdot M_n \cdot \omega_n$ - Classical modal damping
#
# $K_n = \{\phi_n\}^T \times [K] \times \{\phi_n\}$ - Modal stiffness
#
# $\zeta_n$ is the modal damping ratio and $\omega_n$ is the modal vibration frequency.
#
# ## Classical solution
#
# Modal expansion of the excitation vector:
#
# $$\{s\} = \sum_{n=1}^{N} \{s_n\} = \sum_{n=1}^{N} \Gamma_n \cdot [M] \times \{\phi_n\}$$
#
# where $\Gamma_n$ is the participation factor of mode *n*, given by:
#
# $$\Gamma_n = \frac{\{\phi_n\}^T \times \{s\}}{M_n}$$
#
# The dynamic equilibrium equation in modal coordinates then becomes:
#
# $$\ddot q_n + 2 \cdot \zeta_n \cdot \omega_n \cdot \dot q_n + \omega_n^2 \cdot q_n = \Gamma_n \cdot p(t)$$
#
# For the harmonic excitation defined above, the steady-state modal response is given by:
#
# $$q_n(t) = \frac{\Gamma_n}{K_n} \cdot \frac{(1 - \beta_n^2) \cdot \sin (\omega \cdot t) - 2 \cdot \zeta_n \cdot \beta_n \cdot \cos (\omega \cdot t)}{(1- \beta_n^2)^2 + (2 \cdot \zeta_n \cdot \beta_n)^2}$$
#
# where $\beta_n = \frac{\omega}{\omega_n}$ is the ratio between the excitation frequency and the modal vibration frequency. In that expression the term $\frac{\Gamma_n}{K_n} = q_{0n}$ corresponds to the displacement amplitude of the modal response, since the other factor is dimensionless. Indeed:
#
# $$\lim_{\beta \to 0} q_n(t) = \frac{\Gamma_n}{K_n} \cdot \sin (\omega \cdot t) = q_{0n} \cdot \sin (\omega \cdot t)$$
#
# The displacement response of the oscillator in physical coordinates is given by:
#
# $$\{u(t)\} = [\Phi] \times \{q(t)\} = \sum_{n=1}^{N} \{\phi_n\} \cdot q_n(t)$$
#
# ## Alternative solution
#
# The applied excitation produces static displacements given by:
#
# $$\{u_{st}\} = [K^{-1}] \times \{s\}$$
#
# Performing the modal expansion of this displacement vector yields the corresponding static modal coordinates:
#
# $$q_{n,st} = \frac{\{\phi_n\}^T \times [M] \times \{u_{st}\}}{M_n}$$
#
# The modal responses are then given in terms of these coordinates:
#
# $$q_n(t) = q_{n,st} \cdot \frac{(1 - \beta_n^2) \cdot \sin (\omega \cdot t) - 2 \cdot \zeta_n \cdot \beta_n \cdot \cos (\omega \cdot t)}{(1- \beta_n^2)^2 + (2 \cdot \zeta_n \cdot \beta_n)^2}$$
#
# and the displacement response of the oscillator in physical coordinates, as in the previous case, is given by:
#
# $$\{u\} = [\Phi] \times \{q(t)\} = \sum_{n=1}^{N} \{\phi_n\} \cdot q_n(t)$$
#
# ## Comparison of solutions
#
# As can be seen, the two solutions differ only in the multiplicative term of the modal response, so they are equivalent if and only if $q_{0n} = q_{n,st}$, that is:
#
# $$\frac{\{\phi_n\}^T \times \{s\}}{M_n \cdot K_n} = \frac{\{\phi_n\}^T \times [M] \times [K^{-1}] \times \{s\}}{M_n}$$
#
# Eliminating the common factors, we have:
#
# $$\frac{1}{K_n} = [M] \times [K^{-1}]$$
# $$\frac{1}{K_n} \cdot [K] = [M]$$
#
# Pre-multiplying by $\{\phi_n\}^T$ and post-multiplying by $\{\phi_n\}$ on both sides of the equality:
#
# $$\frac{1}{K_n} \cdot \{\phi_n\}^T \times [K] \times \{\phi_n\} = \{\phi_n\}^T \times [M] \times \{\phi_n\}$$
# $$\frac{K_n}{K_n} = M_n$$
# $$1 = M_n$$
#
# It is thus concluded that the two solutions are equivalent if the vibration modes are mass-normalized. This is an aspect that is usually not addressed in structural dynamics textbooks.
#
# # Application example
#
# ## Computational laboratory
import sys
import math
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
# %matplotlib inline
print('System: {}'.format(sys.version))
for package in (np, mpl):
print('Package: {} {}'.format(package.__name__, package.__version__))
# ## Data
#
# Oscillator with three degrees of freedom:
MM = np.matrix(np.diag([1.,2.,3.]))
KK = np.matrix([[2,-1,0],[-1,2,-1],[0,-1,1]])*1000.
print(MM)
print(KK)
# Modal analysis:
W2, F1 = np.linalg.eig(KK.I@MM)
print(W2)
print(F1)
# Sorting the modes:
ix = np.argsort(W2)[::-1]
W2 = 1./W2[ix]
F1 = F1[:,ix]
Wn = np.sqrt(W2)
print(W2)
print(Wn)
print(F1)
# Normalization of the vibration modes with respect to the mass matrix:
Fn = F1/np.sqrt(np.diag(F1.T@MM@F1))
print(Fn)
# Modal mass and stiffness:
Mn = np.diag(Fn.T@MM@Fn)
Kn = np.diag(Fn.T@KK@Fn)
print(Mn)
print(Kn)
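# As a standalone cross-check of the conclusion drawn above — the classical amplitude $q_{0n}$ and the alternative amplitude $q_{n,st}$ coincide only for mass-normalized modes — here is a minimal pure-Python 2-DOF sketch. The matrices below are illustrative (chosen so the first eigenvector $[1, 1]$ is exact) and are unrelated to the 3-DOF data of this notebook:

```python
import math

# M = identity, K = [[2, -1], [-1, 2]], excitation s = [1, 0];
# the first vibration mode is phi = [1, 1].
M = [[1.0, 0.0], [0.0, 1.0]]
K = [[2.0, -1.0], [-1.0, 2.0]]
Kinv = [[2.0 / 3, 1.0 / 3], [1.0 / 3, 2.0 / 3]]  # inverse of K
s = [1.0, 0.0]

def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def amplitudes(phi):
    Mn = dot(phi, mat_vec(M, phi))
    Kn = dot(phi, mat_vec(K, phi))
    q0 = dot(phi, s) / Mn / Kn                           # classical: Gamma_n / K_n
    q_st = dot(phi, mat_vec(M, mat_vec(Kinv, s))) / Mn   # alternative: q_{n,st}
    return q0, q_st

print(amplitudes([1.0, 1.0]))                 # unnormalized mode: amplitudes differ
norm = math.sqrt(2.0)                         # sqrt(phi^T M phi)
print(amplitudes([1.0 / norm, 1.0 / norm]))   # mass-normalized mode: amplitudes match
```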
# Dynamic excitation:
sp = np.matrix([[0.], [1.], [0.]])
wp = 2.*np.pi*4.
print(sp)
print(wp)
tt = np.arange(1000)*0.005
pt = np.sin(wp*tt)
plt.figure()
plt.plot(tt, pt)
plt.xlabel('Time (s)')
plt.ylabel('Force (kN)')
plt.show()
# Auxiliary function for computing the modal response:
def qn(amplitude, wp, beta, zn, tt):
"""Calcula a resposta modal do modo n."""
qn_t = amplitude * ((1.-beta**2)*np.sin(wp*tt)-2.*zn*beta*np.cos(wp*tt))/((1.-beta**2)**2+(2.*zn*beta)**2)
return qn_t
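# As a quick numerical sanity check of the statement that the amplitude factor is recovered in the $\beta \to 0$ limit, the response formula reduces to amplitude $\cdot \sin(\omega t)$ for a very small frequency ratio. A self-contained sketch (toy values, independent of the data above):

```python
import math

def modal_response(amplitude, wp, beta, zn, t):
    # Steady-state modal response for harmonic excitation sin(wp*t)
    num = (1.0 - beta**2) * math.sin(wp * t) - 2.0 * zn * beta * math.cos(wp * t)
    den = (1.0 - beta**2)**2 + (2.0 * zn * beta)**2
    return amplitude * num / den

amplitude, wp, zn, t = 2.0, 3.0, 0.05, 0.4
q_small_beta = modal_response(amplitude, wp, beta=1e-6, zn=zn, t=t)
q_limit = amplitude * math.sin(wp * t)
print(abs(q_small_beta - q_limit) < 1e-4)  # True: beta -> 0 recovers q0n * sin(wp*t)
```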
# ## Classical solution
Gn = np.diag(1./Mn)@Fn.T@sp
print(Gn)
qn_t = []
plt.figure()
for n in range(3):
an = Gn[n]/Kn[n]
bn = wp/Wn[n]
q = qn(an[0,0], wp, bn, 0.05, tt)
qn_t.append(q)
plt.plot(tt, q, label='an={:.2e},bn={:.2f}'.format(an[0,0], bn))
plt.legend()
plt.xlabel('Time (s)')
plt.ylabel('Modal displacement (m)')
plt.show()
plt.figure()
u_t = Fn@qn_t
for n in range(3):
plt.plot(tt, u_t[n].T, label='{:.2e}'.format(np.max(u_t[n])))
plt.legend()
plt.xlabel('Time (s)')
plt.ylabel('Displacement (m)')
plt.show()
# ## Alternative solution
ust = KK.I@sp
print(ust)
qn_st_t = []
plt.figure()
for n in range(3):
an = Fn.T[n]@MM@ust/Mn[n]
bn = wp/Wn[n]
qst = qn(an[0,0], wp, bn, 0.05, tt)
qn_st_t.append(qst)
plt.plot(tt, qst, label='an={:.2e},bn={:.2f}'.format(an[0,0], bn))
plt.legend()
plt.xlabel('Time (s)')
plt.ylabel('Modal displacement (m)')
plt.show()
plt.figure()
u_t = Fn@qn_st_t
for n in range(3):
plt.plot(tt, u_t[n].T, label='{:.2e}'.format(np.max(u_t[n])))
plt.legend()
plt.xlabel('Time (s)')
plt.ylabel('Displacement (m)')
plt.show()
# As can be observed, the same solution was obtained via the classical approach and via the alternative approach.
#
# This document was prepared by <NAME>.
| Exemplo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import csv
import pandas as pd
source_folder = "../datasets/original/"
file1 = "greater-hyderabad-municipal-corporation-capital-expenditure.csv"
file2 = "greater-hyderabad-municipal-corporation-revenue-expenditure.csv"
dataset1 = source_folder + file1
dataset2 = source_folder + file2
capital_df = pd.read_csv(dataset1)
revenue_df = pd.read_csv(dataset2)
capital_df.columns
revenue_df.columns
capital_df[capital_df["Function"].str[:5] != "Total"].sort_values(by="2016-17 Budget Estimates", ascending=False).head(16)[['Function','Major Account Head Description', 'Detailed Account Code Description']]
revenue_df[revenue_df["Function"].str[:5] != "Total"].sort_values(by="2016-17 Budget Estimates", ascending=False).head(16)[['Function','Major Account Head Description', 'Detailed Account Code Description']]
# +
def generate_recordtype(row):
    try:
        if row[0][:5] == "Total":
            return "total"
        else:
            return "record"
    except TypeError:
        # row[0] is NaN on some rows; fall back to the "Function" column
        if row[4][:5] == "Total":
            return "total"
        return "record"
capital_df["record_type"] = capital_df.apply(lambda row: generate_recordtype(row), 1)
revenue_df["record_type"] = revenue_df.apply(lambda row: generate_recordtype(row), 1)
# -
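# The tagging logic can be illustrated on plain values before applying it to the DataFrames. This is a simplified sketch with toy data that ignores the NaN fallback to the "Function" column used above:

```python
def record_type(first_cell):
    """Tag a budget row as a grand-total line or an ordinary record."""
    if isinstance(first_cell, str) and first_cell[:5] == "Total":
        return "total"
    return "record"

rows = ["Roads", "Total Capital Expenditure", "Street Lighting"]
print([record_type(r) for r in rows])  # ['record', 'total', 'record']
```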
capital_df.head()
revenue_df.head()
capital_df.to_csv("../datasets/"+ file1, index=False, na_rep="-")
revenue_df.to_csv("../datasets/"+file2, index=False , na_rep="-")
| Scripts/.ipynb_checkpoints/Expenditure - Basic Sanity-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Loading Graphs in NetworkX
# +
import networkx as nx
import numpy as np
import pandas as pd
# %matplotlib notebook
# Instantiate the graph
G1 = nx.Graph()
# add node/edge pairs
G1.add_edges_from([(0, 1),
(0, 2),
(0, 3),
(0, 5),
(1, 3),
(1, 6),
(3, 4),
(4, 5),
(4, 7),
(5, 8),
(8, 9)])
# draw the network G1
nx.draw_networkx(G1)
# -
# ### Adjacency List
# `G_adjlist.txt` is the adjacency list representation of G1.
#
# It can be read as follows:
# * `0 1 2 3 5` $\rightarrow$ node `0` is adjacent to nodes `1, 2, 3, 5`
# * `1 3 6` $\rightarrow$ node `1` is (also) adjacent to nodes `3, 6`
# * `2` $\rightarrow$ node `2` is (also) adjacent to no new nodes
# * `3 4` $\rightarrow$ node `3` is (also) adjacent to node `4`
#
# and so on. Note that adjacencies are only accounted for once (e.g. node `2` is adjacent to node `0`, but node `0` is not listed in node `2`'s row, because that edge has already been accounted for in node `0`'s row).
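# The "each adjacency recorded once" convention can be reproduced from the edge list with a few lines of plain Python (a sketch, independent of NetworkX):

```python
def to_adjlist(edges):
    """Build an adjacency list where each undirected edge appears only once,
    listed under its lower-numbered endpoint."""
    adj = {}
    for u, v in edges:
        lo, hi = min(u, v), max(u, v)
        adj.setdefault(lo, []).append(hi)
        adj.setdefault(hi, [])  # make sure every node gets a row
    return adj

edges = [(0, 1), (0, 2), (0, 3), (0, 5), (1, 3), (1, 6),
         (3, 4), (4, 5), (4, 7), (5, 8), (8, 9)]
adj = to_adjlist(edges)
print(adj[0])  # [1, 2, 3, 5]
print(adj[2])  # [] -- edge (0, 2) was already recorded in node 0's row
```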
# !cat G_adjlist.txt
# If we read in the adjacency list using `nx.read_adjlist`, we can see that it matches `G1`.
G2 = nx.read_adjlist('G_adjlist.txt', nodetype=int)
G2.edges()
# ### Adjacency Matrix
#
# The elements in an adjacency matrix indicate whether pairs of vertices are adjacent or not in the graph. Each node has a corresponding row and column. For example, row `0`, column `1` corresponds to the edge between node `0` and node `1`.
#
# Reading across row `0`, there is a '`1`' in columns `1`, `2`, `3`, and `5`, which indicates that node `0` is adjacent to nodes `1`, `2`, `3`, and `5`.
G_mat = np.array([[0, 1, 1, 1, 0, 1, 0, 0, 0, 0],
[1, 0, 0, 1, 0, 0, 1, 0, 0, 0],
[1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 0, 0, 1, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 0, 1, 0, 1, 0, 0],
[1, 0, 0, 0, 1, 0, 0, 0, 1, 0],
[0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 1, 0]])
G_mat
# If we convert the adjacency matrix to a networkx graph using `nx.Graph`, we can see that it matches G1.
G3 = nx.Graph(G_mat)
G3.edges()
# ### Edgelist
# The edge list format represents edge pairings in the first two columns. Additional edge attributes can be added in subsequent columns. Looking at `G_edgelist.txt` this is the same as the original graph `G1`, but now each edge has a weight.
#
# For example, from the first row, we can see the edge between nodes `0` and `1`, has a weight of `4`.
# !cat G_edgelist.txt
# Using `read_edgelist` and passing in a list of tuples with the name and type of each edge attribute will create a graph with our desired edge attributes.
# +
G4 = nx.read_edgelist('G_edgelist.txt', data=[('Weight', int)])
G4.edges(data=True)
# -
# ### Pandas DataFrame
# Graphs can also be created from pandas dataframes if they are in edge list format.
G_df = pd.read_csv('G_edgelist.txt', delim_whitespace=True,
header=None, names=['n1', 'n2', 'weight'])
G_df
G5 = nx.from_pandas_dataframe(G_df, 'n1', 'n2', edge_attr='weight')  # renamed to from_pandas_edgelist in networkx 2.x
G5.edges(data=True)
# ### Chess Example
# Now let's load in a more complex graph and perform some basic analysis on it.
#
# We will be looking at chess_graph.txt, which is a directed graph of chess games in edge list format.
# !head -5 chess_graph.txt
# Each node is a chess player, and each edge represents a game. The first column with an outgoing edge corresponds to the white player, the second column with an incoming edge corresponds to the black player.
#
# The third column, the weight of the edge, corresponds to the outcome of the game. A weight of 1 indicates white won, a 0 indicates a draw, and a -1 indicates black won.
#
# The fourth column corresponds to approximate timestamps of when the game was played.
#
# We can read in the chess graph using `read_edgelist`, and tell it to create the graph using a `nx.MultiDiGraph`.
chess = nx.read_edgelist('chess_graph.txt', data=[('outcome', int), ('timestamp', float)],
create_using=nx.MultiDiGraph())
chess.is_directed(), chess.is_multigraph()
chess.edges(data=True)
# Looking at the degree of each node, we can see how many games each person played. A dictionary is returned where each key is the player, and each value is the number of games played.
games_played = chess.degree()
games_played
# Using list comprehension, we can find which player played the most games.
# +
max_value = max(games_played.values())
max_key, = [i for i in games_played.keys() if games_played[i] == max_value]
print('player {}\n{} games'.format(max_key, max_value))
# -
# Let's use pandas to find out which players won the most games. First let's convert our graph to a DataFrame.
df = pd.DataFrame(chess.edges(data=True), columns=['white', 'black', 'outcome'])
df.head()
# Next we can use a lambda to pull out the outcome from the attributes dictionary.
df['outcome'] = df['outcome'].map(lambda x: x['outcome'])
df.head()
# To count the number of times a player won as white, we find the rows where the outcome was '1', group by the white player, and sum.
#
# To count the number of times a player won as black, we find the rows where the outcome was '-1', group by the black player, sum, and multiply by -1.
#
# Then we can add these together with a fill value of 0 for those players that only played as either black or white.
won_as_white = df[df['outcome']==1].groupby('white').sum()
won_as_black = -df[df['outcome']==-1].groupby('black').sum()
win_count = won_as_white.add(won_as_black, fill_value=0)
win_count.head()
# Using `nlargest` we find that player 330 won the most games at 109.
win_count.nlargest(5, 'outcome')
| 05_Applied Social Network Analysis in Python/Week_1/Loading+Graphs+in+NetworkX.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import warnings
warnings.filterwarnings('ignore')
# # Setting Up Lambda Function for Drought Monitor
# This sets up 3 Lambda functions:
# * Function for the public-facing website
# * Updater for adding new SMAP data daily to the zarr-staged SMAP data
# * Private Lambda function for computing the drought categories
import podpac
from podpac.managers import aws
from podpac import settings
# set logging to DEBUG to see build process
import logging
logger = logging.getLogger("podpac")
logger.setLevel(logging.DEBUG)
# # Create the Lambda function for the public-facing website
# This function restricts the types of pipelines that can be evaluated on the Lambda function. It prevents nefarious use of the general API.
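# The restriction works roughly like an allowlist keyed on each pipeline's hash: a request is evaluated only if the hash of its pipeline definition is on the list. The following is a hypothetical simplification for illustration — it is not PODPAC's actual implementation, and PODPAC computes `Node.hash` internally from the node definition:

```python
import hashlib

def pipeline_hash(definition_json: str) -> str:
    # Hash the serialized pipeline definition (illustrative stand-in
    # for PODPAC's internal Node.hash).
    return hashlib.sha256(definition_json.encode("utf-8")).hexdigest()

def guarded_eval(definition_json, allowed_hashes, evaluate):
    """Evaluate a pipeline only if its hash is on the allowlist."""
    if pipeline_hash(definition_json) not in allowed_hashes:
        raise PermissionError("pipeline not allowed on this endpoint")
    return evaluate(definition_json)

allowed = {pipeline_hash('{"node": "SinCoords"}')}
print(guarded_eval('{"node": "SinCoords"}', allowed, lambda d: "ok"))  # ok
```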
# Choose the function and bucket names
settings["FUNCTION_NAME"] = "podpac-drought-monitor-lambda-world"
settings["S3_BUCKET_NAME"] = "podpac-drought-monitor-s3"
settings["FUNCTION_ROLE_NAME"] = "podpac-drought-monitor-role"
# Load the pipelines
d0 = podpac.Node.from_json(open('pipeline_d0.json').read())
d1 = podpac.Node.from_json(open('pipeline_d1.json').read())
d2 = podpac.Node.from_json(open('pipeline_d2.json').read())
d3 = podpac.Node.from_json(open('pipeline_d3.json').read())
d4 = podpac.Node.from_json(open('pipeline_d4.json').read())
cats_space = podpac.Node.from_json(open('pipeline_category_space.json').read())
cats_space_us = podpac.Node.from_json(open('pipeline_category_space_us.json').read())
cats_time = podpac.Node.from_json(open('pipeline_category_time.json').read())
smap = podpac.Node.from_json(open('pipeline_moisture_space.json').read())
smap_e_am = podpac.Node.from_json(open('pipeline_moisture_time.json').read())
sin_coords_node = podpac.algorithm.SinCoords()
smap.hash
# Find hashes for functions to restrict capabilities on public Lambda function
hashs = [d0.hash, d1.hash, d2.hash, d3.hash, d4.hash, cats_space.hash, cats_space_us.hash, cats_time.hash, smap.hash, smap_e_am.hash, sin_coords_node.hash]
hashs
# Make the Lambda function
# make lambda node that is restricted only to this node
node = aws.Lambda(function_restrict_pipelines=hashs,
function_allow_unsafe_eval=True,
function_tags={'owner': 'mpu',
'acct': '1010115.01.003'},
function_source_bucket=settings["S3_BUCKET_NAME"],
# The podpac_dist.zip file is expected to live in the same S3 bucket as specified above. It can be omitted in most cases.
# In this case, it contains a 'settings.json' file that contains some permissions that we did not want to send with API requests.
function_source_dist_key='podpac-dm-public.zip',
function_triggers=['eval', 'APIGateway'],
function_env_variables={"S3_BUCKET_NAME": settings["S3_BUCKET_NAME"],
"PODPAC_VERSION": podpac.version.semver()
}
)
node.describe()
node.delete_function()
node.delete_api()
node.build()
node.describe()
# Now we test the Lambda function to make sure it works.
coordinates = podpac.Coordinates([podpac.clinspace(20, 75, 180), podpac.clinspace(-130, -65, 180), '2019-05-19'], ['lat', 'lon', 'time'])
o = smap.eval(coordinates)
o.plot()
# +
coordinates = podpac.Coordinates([podpac.clinspace(20, 75, 180), podpac.clinspace(-130,-65, 180), '2019-05-19'], ['lat', 'lon', 'time'])
# this will work
node = aws.Lambda(source=d0, eval_settings=settings)
output = node.eval(coordinates)
output.plot()
pass
# -
# # Lambda function to update SMAP data zarr store
# This uses a slightly-modified `podpac_dist.zip` file. Its `handler.py` was modified according to the [UpdateSMAPData Notebook](UpdateSMAPData.ipynb)
settings["FUNCTION_NAME"] = "podpac-drought-monitor-lambda-smap-world-updater"
settings["S3_BUCKET_NAME"] = "podpac-drought-monitor-s3"
settings["FUNCTION_ROLE_NAME"] = "podpac-drought-monitor-role"
# Load the pipelines
# Make the Lambda function
# make lambda node that is restricted only to this node
updater_node = aws.Lambda(function_tags={'owner': 'mpu',
'acct': '1010115.01.003'},
function_source_bucket=settings["S3_BUCKET_NAME"],
function_source_dist_key='podpac-dm-smap-world-updater.zip',
function_env_variables={"S3_BUCKET_NAME": settings["S3_BUCKET_NAME"],
"PODPAC_VERSION": podpac.version.semver()
}
)
updater_node.describe()
# updater_node.delete_function()
updater_node.build()
# # Lambda function to compute drought categories
# This is very similar to the first Lambda function -- but this has no evaluation restrictions.
settings["FUNCTION_NAME"] = "podpac-drought-monitor-lambda-compute-stats"
settings["S3_BUCKET_NAME"] = "podpac-drought-monitor-s3"
settings["FUNCTION_ROLE_NAME"] = "podpac-drought-monitor-role"
# Load the pipelines
# Make the Lambda function
# make lambda node that is restricted only to this node
compute_node = aws.Lambda(
function_allow_unsafe_eval=True,
function_tags={'owner': 'mpu', 'acct': '1010115.01.003'},
function_source_bucket=settings["S3_BUCKET_NAME"],
function_source_dist_key='podpac_dist_stats.zip',
function_source_dependencies_key='podpac_deps.zip',
function_env_variables={
"S3_BUCKET_NAME": settings["S3_BUCKET_NAME"],
"PODPAC_VERSION": podpac.version.semver()
}
)
compute_node.describe()
compute_node.delete_function()
compute_node.build()
| notebooks/MakeLambdaFunction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: synapse_pyspark
# kernelspec:
# display_name: Synapse PySpark
# language: Python
# name: synapse_pyspark
# ---
# + [markdown] nteract={"transient": {"deleting": false}}
# # Graph API Module Example Notebook
#
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}}
# %run /OEA_py
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}}
# %run /GraphAPI_py
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}}
# 0) Initialize the OEA framework and modules needed.
oea = OEA()
graphapi = GraphAPI()
# + [markdown] nteract={"transient": {"deleting": false}}
# ## Using Actual Data
#
# ### 1. Processing Graph API Actual Data
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}}
graphapi.ingest()
| modules/Microsoft_Data/Microsoft_Graph/notebook/GraphAPI_module_ingestion.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Python dictionaries
# Dictionaries are mutable; their keys, however, must be immutable (hashable) values.
# +
# # dictionary follow pattern of key and value
# dictionary_name={
# key:value,
# key_2:value_2,....
# }
# -
# ## create dictionary using { } curly brackets
d = {
"int" : [1, 2, 3],
"floats" : [1.5, 2.5, 3.5],
"char" : ['a', 'b', 'c']
}
d
# ## create dictionary using dict()
playing = dict({1:"name", 2:"address", 3:"phone_number"})
playing
my_dict = dict([(1, "orange"), (2, "mango"), (3, "rose")])
my_dict
type([(1, "orange"), (2, "mango"), (3, "rose")])
type((1, "orange"))
# ## converting a list of tuples into a dictionary, i.e. [(), (), ()] => {key: value, ...}
my_dict = dict([(1, "orange"), (2, "mango"), (3, "rose")])
my_dict
from collections import OrderedDict
OrderedDict([
(4, "Four"),
(5, "Five"),
(6, "Six")
])
# # Access dictionary
#
# ## using keys
datatype = {
"int" : [1, 2, 3],
"float" : [1.5, 2.5, 3.5],
"char" : ['a', 'b', 'c']
}
datatype["int"]
datatype["float"]
datatype["string"]
# +
# using get() function
# get(key) returns the value for key,
# or None if the key does not exist
# -
print(datatype.get("string"))
datatype
# it useful in many ways
if datatype.get("string") is None:
print("Key doesn't exist")
else:
print(datatype.get("string"))
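# get() also accepts a second argument that is returned instead of None, which often removes the need for the explicit check above:

```python
sample = {"int": [1, 2, 3]}
print(sample.get("string", "Key doesn't exist"))  # Key doesn't exist
print(sample.get("int", []))                      # [1, 2, 3]
```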
# add element in dictionary
datatype = {
"int" : [1, 2, 3],
"float" : [1.5, 2.5, 3.5],
"char" : ['a', 'b', 'c']
}
datatype["string"] = ["abc", "bcd", "cde"]
print(datatype)
# change value of particular keys
datatype["int"] = [4, 5, 6]
print(datatype)
# +
# remove element from dictionary using pop()
datatype = {'int': [1, 2, 3], 'float': [1.5, 2.5, 3.5], 'char': ['a', 'b', 'c'], 'string': ['abc', 'bcd', 'cde']}
datatype.pop("int")
print(datatype)
# remove using del
del datatype["float"]
print(datatype)
# -
# remove the last inserted item from the dictionary
datatype = {'float': [1.5, 2.5, 3.5], 'char': ['a', 'b', 'c'], 'string': ['abc', 'bcd', 'cde']}
datatype.popitem()
print(datatype)
## assign dictionary to another variable
datatype = {'float': [1.5, 2.5, 3.5], 'char': ['a', 'b', 'c'], 'string': ['abc', 'bcd', 'cde']}
d = datatype
print(d)
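# Note that assignment only creates a second name for the same dictionary object; mutating it through either name affects both. Use copy() for an independent shallow copy:

```python
original = {"a": 1}
alias = original          # same object, two names
clone = original.copy()   # independent shallow copy

alias["b"] = 2
print(original)  # {'a': 1, 'b': 2} -- changed through the alias
print(clone)     # {'a': 1} -- the copy is unaffected
```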
# +
# pop() already Covered
# -
| 11. Python Dictionary/1. How to create a dictionary.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.6.6 64-bit (''py36'': conda)'
# name: python3
# ---
# + [markdown] id="pQp9R7vSNac-"
# # Calculating a rider's CoM position from Retul Vantage data
# _**Copyright (C) <NAME> 2021**_ - vantage_com.ipynb by <NAME> is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
#
# ## Background
# Determining the human body's center of mass is an important tool for analysing the biomechanics and energetics of human motion. In a biomechanical research setting, the most accurate estimates of each segment's CoM position require placing and tracking the three-dimensional position of more than 38 markers (Tisserand et al., 2016). This method is expensive and time consuming, which is impractical for certain applications like bike fitting. Another approach is therefore to use a reduced number of markers to estimate whole-body CoM position (Dumas & Wojtusch, 2017). In either case, the technique involves determining the end points of each segment and estimates of body segment inertial parameters (BSIPs). BSIPs can be obtained in different ways, including direct measurements on cadavers or photogrammetry and medical imaging on living humans, but they are most commonly estimated with regression equations (based on those measurements).
#
# The following approach uses BSIPs based on the regression equations of De Leva (1996) adjusted from the data of Zatsiorsky et al. (1990) in combination with Retul Vantage data (8 markers) to estimate the whole-body CoM position of a 16-segment rigid body biomechanical model (head with neck, upper trunk, middle trunk, lower trunk, upper arm (x2), forearm (x2), hand (x2), thigh (x2), shank (x2), and foot (x2).
#
# Beyond the limitations inherent to estimating BSIPs, the main assumptions for this approach are:
# * Retul Vantage marker placements correspond to segment end-points
# * Motion of the right and left limbs are symmetrical
# * The length of each subject's "head with neck" segment is the same within each sex
# * The alignment of the "head with neck" segment changes as a function of upper-trunk angle
# * The length of each hand is 0 mm
# * The length of each foot is from the calcaneus to the MTP joint
#
# **References**
# * <NAME>. (1996). Adjustments to Zatsiorsky-Seluyanov's segment inertia parameters. _Journal of Biomechanics_, _29_(9), 1223-1230. <https://doi.org/10.1016/0021-9290(95)00178-6>
# * <NAME>., <NAME>., <NAME>., & <NAME>. (2016). A simplified marker set to define the center of mass for stability analysis in dynamic situations. _Gait and Posture_, _48_, 64-67. <https://doi.org/10.1016/j.gaitpost.2016.04.032>
# * <NAME>., & <NAME>. (2017). Estimation of the Body Segment Inertial Parameters for the Rigid Body Biomechanical Models Used in Motion Analysis. In _Handbook of Human Motion_. <https://doi.org/10.1007/978-3-319-30808-1>
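The segmented weighted-sum model described above can be sketched in a few lines. This is a minimal two-segment illustration with made-up numbers, not the De Leva (1996) BSIP values used later in `vantage_com`:

```python
import numpy as np

# Minimal two-segment sketch of the weighted-sum CoM model.
# prox/dist hold segment end-point coordinates; M and L are BSIP mass
# fractions and CoM-position fractions. Values here are illustrative only,
# NOT the De Leva (1996) parameters used below.
prox = np.array([[0.0, 0.0, 0.0],
                 [1.0, 0.0, 0.0]])
dist = np.array([[1.0, 0.0, 0.0],
                 [1.0, 1.0, 0.0]])
M = np.array([[0.6], [0.4]])  # segment mass fractions (sum to 1)
L = np.array([[0.5], [0.5]])  # CoM distance from proximal end, as a fraction

segment_com = prox + L * (dist - prox)            # per-segment CoM coordinates
whole_body_com = np.sum(M * segment_com, axis=0)  # mass-weighted sum
print(whole_body_com)  # [0.7 0.2 0. ]
```

The same two lines, with 16 segments and per-frame marker data, are exactly what the function below computes.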
# + [markdown] id="F49MSqgOkYvw"
# ## Import libraries
# + id="FpW8F0lIkfWk"
# from google.colab import drive
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import xml.etree.ElementTree as ET
from scipy.signal import find_peaks
# + [markdown] id="R-gBsetBkfr-"
# ## Define functions
# * parse_pose_file
# * perpendicular
# * get_circle
# * vantage_com
# + id="eSBWZKT-Ikt-"
def parse_pose_file(pathToPoseFile):
tree = ET.parse(pathToPoseFile)
root = tree.getroot()
foot = []
heel = []
ankle = []
knee = []
hip = []
shoulder = []
elbow = []
wrist = []
for child in root:
if child.tag == 'title':
title = child.text
if child.tag == 'dateTime':
date = child.text
if child.tag == 'upward':
upward = child.text.split(" ")
if child.tag == 'forward':
forward = child.text.split(" ")
if child.tag == 'viewedSide':
side = child.text.split(" ")
if child.tag == 'psoX':
pso_x = child.text.split(" ")
if child.tag == 'psoZ':
pso_y = child.text.split(" ")
for frame in root.findall("./stxyz"):
if frame.tag == 'stxyz':
for point in frame:
if point.tag == 'ft':
foot.append(point.text.split(" "))
elif point.tag == 'he':
heel.append(point.text.split(" "))
elif point.tag == 'an':
ankle.append(point.text.split(" "))
elif point.tag == 'kn':
knee.append(point.text.split(" "))
elif point.tag == 'hp':
hip.append(point.text.split(" "))
elif point.tag == 'sh':
shoulder.append(point.text.split(" "))
elif point.tag == 'el':
elbow.append(point.text.split(" "))
elif point.tag == 'wr':
wrist.append(point.text.split(" "))
footDF = pd.DataFrame(foot, columns=['foot_status', 'foot_time', 'foot_x_pos', 'foot_y_pos', 'foot_z_pos'])
heelDF = pd.DataFrame(heel, columns=['heel_status', 'heel_time', 'heel_x_pos', 'heel_y_pos', 'heel_z_pos'])
ankleDF = pd.DataFrame(ankle, columns=['ankle_status', 'ankle_time', 'ankle_x_pos', 'ankle_y_pos', 'ankle_z_pos'])
kneeDF = pd.DataFrame(knee, columns=['knee_status', 'knee_time', 'knee_x_pos', 'knee_y_pos', 'knee_z_pos'])
hipDF = pd.DataFrame(hip, columns=['hip_status', 'hip_time', 'hip_x_pos', 'hip_y_pos', 'hip_z_pos'])
shoulderDF = pd.DataFrame(shoulder, columns=['shoulder_status', 'shoulder_time', 'shoulder_x_pos', 'shoulder_y_pos', 'shoulder_z_pos'])
elbowDF = pd.DataFrame(elbow, columns=['elbow_status', 'elbow_time', 'elbow_x_pos', 'elbow_y_pos', 'elbow_z_pos'])
wristDF = pd.DataFrame(wrist, columns=['wrist_status', 'wrist_time', 'wrist_x_pos', 'wrist_y_pos', 'wrist_z_pos'])
poseDF = pd.concat([footDF, heelDF, ankleDF, kneeDF, hipDF, shoulderDF, elbowDF, wristDF], axis=1)
columns_to_convert_numeric = ['foot_time', 'foot_x_pos', 'foot_y_pos', 'foot_z_pos',
'heel_time', 'heel_x_pos', 'heel_y_pos', 'heel_z_pos',
'ankle_time', 'ankle_x_pos', 'ankle_y_pos', 'ankle_z_pos',
'knee_time', 'knee_x_pos', 'knee_y_pos', 'knee_z_pos',
'hip_time', 'hip_x_pos', 'hip_y_pos', 'hip_z_pos',
'shoulder_time', 'shoulder_x_pos', 'shoulder_y_pos', 'shoulder_z_pos',
'elbow_time', 'elbow_x_pos', 'elbow_y_pos', 'elbow_z_pos',
'wrist_time', 'wrist_x_pos', 'wrist_y_pos', 'wrist_z_pos']
poseDF[columns_to_convert_numeric] = poseDF[columns_to_convert_numeric].apply(pd.to_numeric)
pose_dict = {'title': title, 'date': date, 'upward': upward, 'forward': forward, 'side': side, 'pso_x': float(pso_x[0]), 'pso_y': float(pso_y[0]), '3d_cord_DF': poseDF}
return pose_dict
# -
def perpendicular(a):
    # Rotate a 2-D vector 90 degrees counter-clockwise: (x, y) -> (-y, x)
    b = np.empty_like(a)
    b[0] = -a[1]
    b[1] = a[0]
    return b
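A quick sanity check of the rotation: the result should always be orthogonal to the input. (The definition is repeated here so the cell runs standalone.)

```python
import numpy as np

# perpendicular() rotates a 2-D vector 90 degrees counter-clockwise,
# (x, y) -> (-y, x). Repeated here so this cell runs standalone.
def perpendicular(a):
    b = np.empty_like(a)
    b[0] = -a[1]
    b[1] = a[0]
    return b

v = np.array([3.0, 4.0])
p = perpendicular(v)
print(p)             # [-4.  3.]
print(np.dot(v, p))  # 0.0 -- always orthogonal to the input
```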
def get_circle(x, y):
# https://scipy-cookbook.readthedocs.io/items/Least_Squares_Circle.html
method_1 = 'algebraic'
# coordinates of the barycenter
x_m = np.mean(x)
y_m = np.mean(y)
# calculation of the reduced coordinates
u = x - x_m
v = y - y_m
# linear system defining the center (uc, vc) in reduced coordinates:
# Suu * uc + Suv * vc = (Suuu + Suvv)/2
# Suv * uc + Svv * vc = (Suuv + Svvv)/2
Suv = sum(u*v)
Suu = sum(u**2)
Svv = sum(v**2)
Suuv = sum(u**2 * v)
Suvv = sum(u * v**2)
Suuu = sum(u**3)
Svvv = sum(v**3)
# Solving the linear system
A = np.array([ [ Suu, Suv ], [Suv, Svv]])
B = np.array([ Suuu + Suvv, Svvv + Suuv ])/2.0
uc, vc = np.linalg.solve(A, B)
xc_1 = x_m + uc
yc_1 = y_m + vc
    # Calculate the distance of each point to the center (xc_1, yc_1)
Ri_1 = np.sqrt((x-xc_1)**2 + (y-yc_1)**2)
R_1 = np.mean(Ri_1)
residu_1 = sum((Ri_1-R_1)**2)
return xc_1, yc_1, R_1, residu_1
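A sanity check of the algebraic circle fit: points sampled from a known circle should be recovered almost exactly. The fit below is condensed from `get_circle` above, with illustrative center and radius values:

```python
import numpy as np

# Sanity check for the algebraic circle fit: noiseless points sampled from a
# known circle (center (2, -1), radius 5) should be recovered almost exactly.
# Condensed from get_circle() above.
def fit_circle(x, y):
    x_m, y_m = np.mean(x), np.mean(y)
    u, v = x - x_m, y - y_m
    A = np.array([[np.sum(u*u), np.sum(u*v)],
                  [np.sum(u*v), np.sum(v*v)]])
    B = np.array([np.sum(u**3) + np.sum(u*v*v),
                  np.sum(v**3) + np.sum(u*u*v)]) / 2.0
    uc, vc = np.linalg.solve(A, B)
    xc, yc = x_m + uc, y_m + vc
    Ri = np.sqrt((x - xc)**2 + (y - yc)**2)
    return xc, yc, np.mean(Ri)

theta = np.linspace(0.0, 2.0*np.pi, 50, endpoint=False)
x = 2.0 + 5.0*np.cos(theta)
y = -1.0 + 5.0*np.sin(theta)
xc, yc, R = fit_circle(x, y)
print(round(xc, 3), round(yc, 3), round(R, 3))  # 2.0 -1.0 5.0
```

This is the same fit used later to locate the bottom bracket from the pedal-spindle trajectory.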
# + colab={"base_uri": "https://localhost:8080/", "height": 130} executionInfo={"elapsed": 2283, "status": "error", "timestamp": 1633043389458, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "18321452801657203336"}, "user_tz": 360} id="CJEDj0ZKNVvJ" outputId="8385a6ec-226b-4978-bb39-78099a05cb2c"
def vantage_com(**kwargs):
'''
VANTAGE_COM Calculates a cyclist's center of mass position from the Retul
Vantage motion capture output file (.pose).
Parameters
----------
file : STR
The .pose file name. Needs to exist in current directory.
sex : STR
Sex of the rider as 'male' or 'female' (default : 'male')
mass : FLOAT
Mass of rider in kilograms (default male : 78.05, default female : 64.26)
sacralAngle : FLOAT
Angle of sacrum in degrees counter-clockwise from horizontal (default : 54.0)
shoulderWidth : FLOAT
Width of shoulders in millimeters (default male : 411.0, default female : 367.0)
hipWidth : FLOAT
        Width of hips in millimeters (default male : 296.0, default female : 291.0)
shoeMass : FLOAT
Mass of each shoe in kilograms (default : 1)
Returns
-------
com3d : NP.ARRAY (m,3)
Array (m,3) of 3-D center of mass position within the Vantage CS.
Upsampled to 200 Hz.
com2bb : NP.ARRAY (m,1)
Array (m,1) of fore-aft CoM position relative to the bicycle bottom bracket.
Upsampled to 200 Hz.
com2bbMean : FLOAT
Mean value of com2bb.
Examples
--------
com3d = vantage_com()
vantage_com(file='foo.pose')
vantage_com(file='foo.pose', shoeMass = 0.5, sex = 'female', mass = 58.0)
'''
### Evaluate inputs and initialize defaults if needed
if not kwargs.get('file'):
raise ValueError('Please specify a .pose file in the current directory')
if not kwargs.get('sex'):
kwargs.setdefault('sex','male')
    if not kwargs.get('mass'):
        if kwargs.get('sex') == 'male':
            kwargs.setdefault('mass', 78.05)
        else:
            kwargs.setdefault('mass', 64.26)
if not kwargs.get('sacralAngle'):
kwargs.setdefault('sacralAngle',54.0)
if not kwargs.get('shoulderWidth'):
if kwargs.get('sex') == 'male':
kwargs.setdefault('shoulderWidth',411.0)
else:
kwargs.setdefault('shoulderWidth',367.0)
if not kwargs.get('hipWidth'):
if kwargs.get('sex') == 'male':
kwargs.setdefault('hipWidth',296.0)
else:
kwargs.setdefault('hipWidth',291.0)
if not kwargs.get('shoeMass'):
kwargs.setdefault('shoeMass', 1.0)
### Call parse_pose_file function
dataPose = parse_pose_file(kwargs.get('file'))
### Set Body Segment Inertial Parameters (BSIPs)
'''
Briefly, the segment mass is computed as a percentage of the body mass.
The position of the segment's CoM is computed as a percentage of the
segment length, defined as the distance between the segment end-points.
'''
nSegments = 16
if kwargs.get('sex') == 'male':
# Mass (% of whole body)
M = np.array([
0.0694, # head
0.1596, # upper trunk
0.1633, # middle trunk
0.1117, # lower trunk
0.0271, # upper arm right
0.0162, # forearm right
0.0061, # hand right
0.1416, # thigh right
0.0433, # shank right
0.0137, # foot right
0.0271, # upper arm left
0.0162, # forearm left
0.0061, # hand left
0.1416, # thigh left
0.0433, # shank left
0.0137 # foot left
]).reshape(16,1)
# Length to segment CoM (% from proximal end-point)
L = np.array([
0.5002, # head
0.2999, # upper trunk
0.4502, # middle trunk
0.6115, # lower trunk
0.5772, # upper arm right
0.4608, # forearm right
0.7948, # hand right
0.4095, # thigh right
0.4459, # shank right
0.4415, # foot right
0.5772, # upper arm left
0.4608, # forearm left
0.7948, # hand left
0.4095, # thigh left
0.4459, # shank left
0.4415 # foot left
]).reshape(16,1)
elif kwargs.get('sex') == 'female':
# Mass (% of whole body)
M = np.array([
0.0668, # head
0.1545, # upper trunk
0.1465, # middle trunk
0.1247, # lower trunk
0.0255, # upper arm right
0.0138, # forearm right
0.0056, # hand right
0.1478, # thigh right
0.0481, # shank right
0.0129, # foot right
0.0255, # upper arm left
0.0138, # forearm left
0.0056, # hand left
0.1478, # thigh left
0.0481, # shank left
0.0129, # foot left
]).reshape(16,1)
# Length to segment CoM (% from proximal end-point)
L = np.array([
0.4841, # head
0.2077, # upper trunk
0.4512, # middle trunk
0.4920, # lower trunk
0.5754, # upper arm right
0.4592, # forearm right
0.7534, # hand right
0.3612, # thigh right
0.4416, # shank right
0.4014, # foot right
0.5754, # upper arm left
0.4559, # forearm left
0.7474, # hand left
0.3612, # thigh left
0.4416, # shank left
0.4014 # foot left
]).reshape(16,1)
### Add shoe mass to each foot
relativeShoeMass = kwargs.get('shoeMass') / kwargs.get('mass')
M = M - (relativeShoeMass / nSegments)
M[9] = M[9] + relativeShoeMass
M[15] = M[9]
### Upsample and synchronize time-series data
'''
Vantage markers are initialized asynchronously. Upsampling the data from
18 Hz to 200 Hz decreases the absolute time delay in initialization
between markers. The wrist marker is initialized last. Therefore, clip all
other marker signals collected prior to the first frame of the wrist marker.
'''
markerList = ['foot','heel','ankle','knee','hip','shoulder','elbow','wrist']
nMarkers = len(markerList)
df = dataPose.get('3d_cord_DF').copy()
nFrames = len(df)
# Create new sample points based on when wrist marker is initialized
tTot = df['foot_time'].max() - df['wrist_time'].min()
nSamples = int(tTot * 200)
xVals = np.linspace(df['wrist_time'].min(), df['foot_time'].max(), nSamples)
d = {}
for i in range(nMarkers):
marker = markerList[i]
t = df[marker + '_time']
y = t
yInterp = np.interp(xVals,t,y)
d[marker + '_time'] = yInterp
y = df[marker + '_x_pos']
yInterp = np.interp(xVals,t,y)
d[marker + '_x_pos'] = yInterp
y = df[marker + '_y_pos']
yInterp = np.interp(xVals,t,y)
d[marker + '_y_pos'] = yInterp
y = df[marker + '_z_pos']
yInterp = np.interp(xVals,t,y)
d[marker + '_z_pos'] = yInterp
### Create out-of-phase contralateral markers
'''
Phase-shift ipsilateral markers by finding the half-cycle period of the
x-coordinate of the meta-tarsal marker ("ft") signal. This should be a
good estimate of the time taken to complete a half crank cycle.
'''
# Find peaks in signal
peaks,_ = find_peaks(d['foot_x_pos'],distance = 60)
# Calculate phase shift as half width of wavelength
waveLength = 0
for i in range(0,len(peaks)-1):
waveLength += peaks[i+1] - peaks[i]
waveLengthMean = waveLength / (len(peaks)-1)
phaseShift = int(waveLengthMean/2)
for i in range(nMarkers):
marker = markerList[i]
signalTime = d[marker + '_time']
signalX = d[marker + '_x_pos']
signalY = d[marker + '_y_pos']
signalZ = d[marker + '_z_pos']
if dataPose['side'] == ['R']:
d[marker + '_time'] = signalTime[0:len(signalX)-phaseShift]
d[marker + '_R_x_pos'] = signalX[0:len(signalX)-phaseShift]
d[marker + '_x_pos'] = signalX[phaseShift-1:-1]
d[marker + '_R_y_pos'] = signalY[0:len(signalY)-phaseShift]
d[marker + '_y_pos'] = signalY[phaseShift-1:-1]
d[marker + '_R_z_pos'] = signalZ[0:len(signalZ)-phaseShift]
d[marker + '_z_pos'] = signalZ[phaseShift-1:-1]
elif dataPose['side'] == ['L']:
d[marker + '_time'] = signalTime[0:len(signalX)-phaseShift]
d[marker + '_R_x_pos'] = signalX[phaseShift-1:-1]
d[marker + '_x_pos'] = signalX[0:len(signalX)-phaseShift]
d[marker + '_R_y_pos'] = signalY[phaseShift-1:-1]
d[marker + '_y_pos'] = signalY[0:len(signalY)-phaseShift]
d[marker + '_R_z_pos'] = signalZ[phaseShift-1:-1]
d[marker + '_z_pos'] = signalZ[0:len(signalZ)-phaseShift]
df200Hz = pd.DataFrame(d)
nSamples = len(df200Hz)
### Create estimate of lower trunk length
'''
Estimate trunk length as the euclidean distance between shoulder and hip XY
position. Use BSIPs from De Leva (1996) for the distribution of trunk length.
'''
trunkLength = []
for i in range(nSamples):
shoulderXy = np.array((df200Hz['shoulder_x_pos'][i], df200Hz['shoulder_y_pos'][i]))
hipXy = np.array((df200Hz['hip_x_pos'][i], df200Hz['hip_y_pos'][i]))
trunkLength.append(np.linalg.norm(shoulderXy-hipXy))
trunkLength = np.array(trunkLength)
if kwargs.get('sex') == 'male':
lowerTrunkLength = trunkLength * (145.7 / 531.9)
elif kwargs.get('sex') == 'female':
lowerTrunkLength = trunkLength * (181.5 / 529.9)
### Create virtual marker at proximal endpoint of lower trunk
lowerTrunkLengthX = np.cos(np.deg2rad(kwargs.get('sacralAngle'))) * lowerTrunkLength
lowerTrunkLengthY = np.sin(np.deg2rad(kwargs.get('sacralAngle'))) * lowerTrunkLength
if dataPose['side'] == ['R']:
df200Hz['lowerTrunk_x_pos'] = df200Hz['hip_R_x_pos'] - lowerTrunkLengthX
df200Hz['lowerTrunk_y_pos'] = df200Hz['hip_R_y_pos'] - lowerTrunkLengthY
df200Hz['lowerTrunk_z_pos'] = df200Hz['hip_R_z_pos']
elif dataPose['side'] == ['L']:
df200Hz['lowerTrunk_x_pos'] = df200Hz['hip_x_pos'] + lowerTrunkLengthX
df200Hz['lowerTrunk_y_pos'] = df200Hz['hip_y_pos'] - lowerTrunkLengthY
df200Hz['lowerTrunk_z_pos'] = df200Hz['hip_z_pos']
### Create estimate of head and middle trunk
'''
Use BSIPs from De Leva (1996) to estimate head length. Estimate middle trunk
length as a portion of the residual length from lower trunk marker to shoulder.
'''
residualTrunkLength = []
residualTrunkAngle = []
for i in range(nSamples):
shoulderXy = np.array((df200Hz['shoulder_x_pos'][i], df200Hz['shoulder_y_pos'][i]))
lowerTrunkXy = np.array((df200Hz['lowerTrunk_x_pos'][i], df200Hz['lowerTrunk_y_pos'][i]))
residualTrunkLength.append(np.linalg.norm(shoulderXy-lowerTrunkXy))
residualTrunkAngle.append(np.arctan((shoulderXy[1] - lowerTrunkXy[1]) / (shoulderXy[0]-lowerTrunkXy[0])))
residualTrunkLength = np.array(residualTrunkLength)
residualTrunkAngle = np.array(residualTrunkAngle)
if kwargs.get('sex') == 'male':
headLength = 242.9
middleTrunkLength = residualTrunkLength * (215.5 / (170.7 + 215.5))
elif kwargs.get('sex') == 'female':
headLength = 243.7
middleTrunkLength = residualTrunkLength * (205.3 / (142.5 + 205.3))
### Estimate head angle based on residual trunk angle
'''
Linear function of residual trunk angle. Predicting that the head angle
moves from 55 to 80 deg. relative to horizontal as the trunk moves from 0
to 90 deg. (vertical).
'''
if dataPose['side'] == ['R']:
headAngle = 0.5 * residualTrunkAngle + 0.95993
elif dataPose['side'] == ['L']:
headAngle = 0.27778 * residualTrunkAngle + 1.309
### Create virtual markers at the proximal end of the head and middle trunk
'''
Estimate the position of vertex of head by adding head length to shoulder
position and using residual trunk angle. Estimate the position of the
proximal end of the middle trunk by adding middle trunk length to lower
trunk marker at residual trunk angle.
'''
middleTrunkLengthX = np.cos(residualTrunkAngle) * middleTrunkLength
middleTrunkLengthY = np.sin(residualTrunkAngle) * middleTrunkLength
headLengthX = np.cos(headAngle) * headLength
headLengthY = np.sin(headAngle) * headLength
if dataPose['side'] == ['R']:
df200Hz['head_x_pos'] = df200Hz['shoulder_R_x_pos'] - headLengthX
df200Hz['head_y_pos'] = df200Hz['shoulder_R_y_pos'] - headLengthY
df200Hz['head_z_pos'] = df200Hz['shoulder_R_z_pos']
df200Hz['middleTrunk_x_pos'] = df200Hz['lowerTrunk_x_pos'] - middleTrunkLengthX
df200Hz['middleTrunk_y_pos'] = df200Hz['lowerTrunk_y_pos'] - middleTrunkLengthY
df200Hz['middleTrunk_z_pos'] = df200Hz['lowerTrunk_z_pos']
elif dataPose['side'] == ['L']:
df200Hz['head_x_pos'] = df200Hz['shoulder_x_pos'] + headLengthX
df200Hz['head_y_pos'] = df200Hz['shoulder_y_pos'] - headLengthY
df200Hz['head_z_pos'] = df200Hz['shoulder_z_pos']
df200Hz['middleTrunk_x_pos'] = df200Hz['lowerTrunk_x_pos'] + middleTrunkLengthX
df200Hz['middleTrunk_y_pos'] = df200Hz['lowerTrunk_y_pos'] + middleTrunkLengthY
df200Hz['middleTrunk_z_pos'] = df200Hz['lowerTrunk_z_pos']
### Adjust Z coordinates of contralateral markers
'''
If custom shoulder or hip width measurements are not provided then defaults
are taken from the CDC data
(<https://www.cdc.gov/nchs/data/series/sr_11/sr11_249.pdf>) for average
shoulder and hip breadth for males and females.
'''
if dataPose['side'] == ['R']:
df200Hz['head_z_pos'] = df200Hz['head_z_pos'] - (kwargs.get('shoulderWidth') / 2)
df200Hz['middleTrunk_z_pos'] = df200Hz['middleTrunk_z_pos'] - (kwargs.get('hipWidth') / 2)
df200Hz['lowerTrunk_z_pos'] = df200Hz['lowerTrunk_z_pos'] - (kwargs.get('hipWidth') / 2)
df200Hz['shoulder_z_pos'] = df200Hz['shoulder_R_z_pos'] - kwargs.get('shoulderWidth')
df200Hz['elbow_z_pos'] = df200Hz['shoulder_R_z_pos'] - kwargs.get('shoulderWidth') - (df200Hz['elbow_R_z_pos'] - df200Hz['shoulder_R_z_pos'])
df200Hz['wrist_z_pos'] = df200Hz['shoulder_R_z_pos'] - kwargs.get('shoulderWidth') - (df200Hz['wrist_R_z_pos'] - df200Hz['shoulder_R_z_pos'])
df200Hz['hip_z_pos'] = df200Hz['hip_R_z_pos'] - kwargs.get('hipWidth')
df200Hz['knee_z_pos'] = df200Hz['hip_R_z_pos'] - kwargs.get('hipWidth') - (df200Hz['knee_R_z_pos'] - df200Hz['hip_R_z_pos'])
df200Hz['ankle_z_pos'] = df200Hz['hip_R_z_pos'] - kwargs.get('hipWidth') - (df200Hz['ankle_R_z_pos'] - df200Hz['hip_R_z_pos'])
df200Hz['heel_z_pos'] = df200Hz['hip_R_z_pos'] - kwargs.get('hipWidth') - (df200Hz['heel_R_z_pos'] - df200Hz['hip_R_z_pos'])
df200Hz['foot_z_pos'] = df200Hz['hip_R_z_pos'] - kwargs.get('hipWidth') - (df200Hz['foot_R_z_pos'] - df200Hz['hip_R_z_pos'])
elif dataPose['side'] == ['L']:
df200Hz['head_z_pos'] = df200Hz['head_z_pos'] - (kwargs.get('shoulderWidth') / 2)
df200Hz['middleTrunk_z_pos'] = df200Hz['middleTrunk_z_pos'] - (kwargs.get('hipWidth') / 2)
df200Hz['lowerTrunk_z_pos'] = df200Hz['lowerTrunk_z_pos'] - (kwargs.get('hipWidth') / 2)
df200Hz['shoulder_R_z_pos'] = df200Hz['shoulder_z_pos'] - kwargs.get('shoulderWidth')
df200Hz['elbow_R_z_pos'] = df200Hz['shoulder_z_pos'] - kwargs.get('shoulderWidth') - (df200Hz['elbow_z_pos'] - df200Hz['shoulder_z_pos'])
df200Hz['wrist_R_z_pos'] = df200Hz['shoulder_z_pos'] - kwargs.get('shoulderWidth') - (df200Hz['wrist_z_pos'] - df200Hz['shoulder_z_pos'])
df200Hz['hip_R_z_pos'] = df200Hz['hip_z_pos'] - kwargs.get('hipWidth')
df200Hz['knee_R_z_pos'] = df200Hz['hip_z_pos'] - kwargs.get('hipWidth') - (df200Hz['knee_z_pos'] - df200Hz['hip_z_pos'])
df200Hz['ankle_R_z_pos'] = df200Hz['hip_z_pos'] - kwargs.get('hipWidth') - (df200Hz['ankle_z_pos'] - df200Hz['hip_z_pos'])
df200Hz['heel_R_z_pos'] = df200Hz['hip_z_pos'] - kwargs.get('hipWidth') - (df200Hz['heel_z_pos'] - df200Hz['hip_z_pos'])
df200Hz['foot_R_z_pos'] = df200Hz['hip_z_pos'] - kwargs.get('hipWidth') - (df200Hz['foot_z_pos'] - df200Hz['hip_z_pos'])
### Convert data to left side Vantage coordinate system
'''
Left side view X,Y,Z = right side view -X,Y,-Z.
'''
for i in range(nMarkers):
marker = markerList[i]
df200Hz[marker + '_x_pos'] = df200Hz[marker + '_x_pos'] * float(dataPose['forward'][0])
df200Hz[marker + '_R_x_pos'] = df200Hz[marker + '_R_x_pos'] * float(dataPose['forward'][0])
df200Hz[marker + '_z_pos'] = df200Hz[marker + '_z_pos'] * float(dataPose['forward'][0])
df200Hz[marker + '_R_z_pos'] = df200Hz[marker + '_R_z_pos'] * float(dataPose['forward'][0])
df200Hz['head_x_pos'] = df200Hz['head_x_pos'] * float(dataPose['forward'][0])
df200Hz['middleTrunk_x_pos'] = df200Hz['middleTrunk_x_pos'] * float(dataPose['forward'][0])
df200Hz['lowerTrunk_x_pos'] = df200Hz['lowerTrunk_x_pos'] * float(dataPose['forward'][0])
df200Hz['head_z_pos'] = df200Hz['head_z_pos'] * float(dataPose['forward'][0])
df200Hz['middleTrunk_z_pos'] = df200Hz['middleTrunk_z_pos'] * float(dataPose['forward'][0])
df200Hz['lowerTrunk_z_pos'] = df200Hz['lowerTrunk_z_pos'] * float(dataPose['forward'][0])
### Specify segment end points
'''
Use specific marker positions to define the proximal and distal end points of each segment.
'''
# Pre-allocate arrays to store segment endpoints
prox = np.zeros((nSamples, nSegments, 3))
dist = np.zeros((nSamples, nSegments, 3))
head = np.zeros((nSamples, 2, 3))
armRight = np.zeros((nSamples, 3, 3))
armLeft = np.zeros((nSamples, 3, 3))
trunk = np.zeros((nSamples, 4, 3))
legRight = np.zeros((nSamples, 6, 3))
legLeft = np.zeros((nSamples, 6, 3))
for i in range(nSamples):
# Proximal Marker Segment Proximal Endpoint
# --------------- ------- -----------------
prox[i,:,:] = [
[df200Hz['head_x_pos'][i], df200Hz['head_y_pos'][i], df200Hz['head_z_pos'][i]], # head vertex
[np.mean([df200Hz['shoulder_x_pos'][i], df200Hz['shoulder_R_x_pos'][i]]),
np.mean([df200Hz['shoulder_y_pos'][i], df200Hz['shoulder_R_y_pos'][i]]),
np.mean([df200Hz['shoulder_z_pos'][i], df200Hz['shoulder_R_z_pos'][i]])], # upper trunk jugular notch
[df200Hz['middleTrunk_x_pos'][i], df200Hz['middleTrunk_y_pos'][i], df200Hz['middleTrunk_z_pos'][i]], # middle trunk xyphion
[df200Hz['lowerTrunk_x_pos'][i], df200Hz['lowerTrunk_y_pos'][i], df200Hz['lowerTrunk_z_pos'][i]], # lower trunk omphalion
[df200Hz['shoulder_R_x_pos'][i], df200Hz['shoulder_R_y_pos'][i], df200Hz['shoulder_R_z_pos'][i]], # upper arm right shoulder jc right
[df200Hz['elbow_R_x_pos'][i], df200Hz['elbow_R_y_pos'][i], df200Hz['elbow_R_z_pos'][i]], # forearm right elbow jc right
[df200Hz['wrist_R_x_pos'][i], df200Hz['wrist_R_y_pos'][i], df200Hz['wrist_R_z_pos'][i]], # hand right stylion right
[df200Hz['hip_R_x_pos'][i], df200Hz['hip_R_y_pos'][i], df200Hz['hip_R_z_pos'][i]], # thigh right hip jc right
[df200Hz['knee_R_x_pos'][i], df200Hz['knee_R_y_pos'][i], df200Hz['knee_R_z_pos'][i]], # shank right knee jc right
[df200Hz['heel_R_x_pos'][i], df200Hz['heel_R_y_pos'][i], df200Hz['heel_R_z_pos'][i]], # foot right calcaneus right
[df200Hz['shoulder_x_pos'][i], df200Hz['shoulder_y_pos'][i], df200Hz['shoulder_z_pos'][i]], # upper arm left shoulder jc left
[df200Hz['elbow_x_pos'][i], df200Hz['elbow_y_pos'][i], df200Hz['elbow_z_pos'][i]], # forearm left elbow jc left
[df200Hz['wrist_x_pos'][i], df200Hz['wrist_y_pos'][i], df200Hz['wrist_z_pos'][i]], # hand left stylion left
[df200Hz['hip_x_pos'][i], df200Hz['hip_y_pos'][i], df200Hz['hip_z_pos'][i]], # thigh left hip jc left
[df200Hz['knee_x_pos'][i], df200Hz['knee_y_pos'][i], df200Hz['knee_z_pos'][i]], # shank left knee jc left
[df200Hz['heel_x_pos'][i], df200Hz['heel_y_pos'][i], df200Hz['heel_z_pos'][i]], # foot left calcaneus left
]
        # Distal Marker           Segment         Distal Endpoint
        # ---------------         -------         ---------------
dist[i,:,:] = [
[np.mean([df200Hz['shoulder_x_pos'][i], df200Hz['shoulder_R_x_pos'][i]]),
np.mean([df200Hz['shoulder_y_pos'][i], df200Hz['shoulder_R_y_pos'][i]]),
np.mean([df200Hz['shoulder_z_pos'][i], df200Hz['shoulder_R_z_pos'][i]])], # head mid cervicale
[df200Hz['middleTrunk_x_pos'][i], df200Hz['middleTrunk_y_pos'][i], df200Hz['middleTrunk_z_pos'][i]], # upper trunk xyphion
[df200Hz['lowerTrunk_x_pos'][i], df200Hz['lowerTrunk_y_pos'][i], df200Hz['lowerTrunk_z_pos'][i]], # middle trunk omphalion
[np.mean([df200Hz['hip_x_pos'][i], df200Hz['hip_R_x_pos'][i]]),
np.mean([df200Hz['hip_y_pos'][i], df200Hz['hip_R_y_pos'][i]]),
np.mean([df200Hz['hip_z_pos'][i], df200Hz['hip_R_z_pos'][i]])], # lower trunk mid hip jc
[df200Hz['elbow_R_x_pos'][i], df200Hz['elbow_R_y_pos'][i], df200Hz['elbow_R_z_pos'][i]], # upper arm right elbow jc right
[df200Hz['wrist_R_x_pos'][i], df200Hz['wrist_R_y_pos'][i], df200Hz['wrist_R_z_pos'][i]], # forearm right stylion right
[df200Hz['wrist_R_x_pos'][i], df200Hz['wrist_R_y_pos'][i], df200Hz['wrist_R_z_pos'][i]], # hand right 3rd metacarpale right
[df200Hz['knee_R_x_pos'][i], df200Hz['knee_R_y_pos'][i], df200Hz['knee_R_z_pos'][i]], # thigh right knee jc right
[df200Hz['ankle_R_x_pos'][i], df200Hz['ankle_R_y_pos'][i], df200Hz['ankle_R_z_pos'][i]], # shank right ankle jc right
[df200Hz['foot_R_x_pos'][i], df200Hz['foot_R_y_pos'][i], df200Hz['foot_R_z_pos'][i]], # foot right toe tip right
[df200Hz['elbow_x_pos'][i], df200Hz['elbow_y_pos'][i], df200Hz['elbow_z_pos'][i]], # upper arm left elbow jc left
[df200Hz['wrist_x_pos'][i], df200Hz['wrist_y_pos'][i], df200Hz['wrist_z_pos'][i]], # forearm left stylion left
[df200Hz['wrist_x_pos'][i], df200Hz['wrist_y_pos'][i], df200Hz['wrist_z_pos'][i]], # hand left 3rd metacarpale left
[df200Hz['knee_x_pos'][i], df200Hz['knee_y_pos'][i], df200Hz['knee_z_pos'][i]], # thigh left knee jc left
[df200Hz['ankle_x_pos'][i], df200Hz['ankle_y_pos'][i], df200Hz['ankle_z_pos'][i]], # shank left ankle jc left
[df200Hz['foot_x_pos'][i], df200Hz['foot_y_pos'][i], df200Hz['foot_z_pos'][i]], # foot left toe tip left
]
# Create additional segments specifically for animation purposes
head[i,:,:] = [
[df200Hz['head_x_pos'][i], df200Hz['head_y_pos'][i], df200Hz['head_z_pos'][i]],
[np.mean([df200Hz['shoulder_x_pos'][i], df200Hz['shoulder_R_x_pos'][i]]),
np.mean([df200Hz['shoulder_y_pos'][i], df200Hz['shoulder_R_y_pos'][i]]),
np.mean([df200Hz['shoulder_z_pos'][i], df200Hz['shoulder_R_z_pos'][i]])]
]
armRight[i,:,:] = [
[df200Hz['shoulder_R_x_pos'][i], df200Hz['shoulder_R_y_pos'][i], df200Hz['shoulder_R_z_pos'][i]],
[df200Hz['elbow_R_x_pos'][i], df200Hz['elbow_R_y_pos'][i], df200Hz['elbow_R_z_pos'][i]],
[df200Hz['wrist_R_x_pos'][i], df200Hz['wrist_R_y_pos'][i], df200Hz['wrist_R_z_pos'][i]]
]
armLeft[i,:,:] = [
[df200Hz['shoulder_x_pos'][i], df200Hz['shoulder_y_pos'][i], df200Hz['shoulder_z_pos'][i]],
[df200Hz['elbow_x_pos'][i], df200Hz['elbow_y_pos'][i], df200Hz['elbow_z_pos'][i]],
[df200Hz['wrist_x_pos'][i], df200Hz['wrist_y_pos'][i], df200Hz['wrist_z_pos'][i]]
]
trunk[i,:,:] = [
[np.mean([df200Hz['shoulder_x_pos'][i], df200Hz['shoulder_R_x_pos'][i]]),
np.mean([df200Hz['shoulder_y_pos'][i], df200Hz['shoulder_R_y_pos'][i]]),
np.mean([df200Hz['shoulder_z_pos'][i], df200Hz['shoulder_R_z_pos'][i]])],
[df200Hz['middleTrunk_x_pos'][i], df200Hz['middleTrunk_y_pos'][i], df200Hz['middleTrunk_z_pos'][i]],
[df200Hz['lowerTrunk_x_pos'][i], df200Hz['lowerTrunk_y_pos'][i], df200Hz['lowerTrunk_z_pos'][i]],
[np.mean([df200Hz['hip_x_pos'][i], df200Hz['hip_R_x_pos'][i]]),
np.mean([df200Hz['hip_y_pos'][i], df200Hz['hip_R_y_pos'][i]]),
np.mean([df200Hz['hip_z_pos'][i], df200Hz['hip_R_z_pos'][i]])]
]
legRight[i,:,:] = [
[df200Hz['hip_R_x_pos'][i], df200Hz['hip_R_y_pos'][i], df200Hz['hip_R_z_pos'][i]],
[df200Hz['knee_R_x_pos'][i], df200Hz['knee_R_y_pos'][i], df200Hz['knee_R_z_pos'][i]],
[df200Hz['ankle_R_x_pos'][i], df200Hz['ankle_R_y_pos'][i], df200Hz['ankle_R_z_pos'][i]],
[df200Hz['heel_R_x_pos'][i], df200Hz['heel_R_y_pos'][i], df200Hz['heel_R_z_pos'][i]],
[df200Hz['foot_R_x_pos'][i], df200Hz['foot_R_y_pos'][i], df200Hz['foot_R_z_pos'][i]],
[df200Hz['ankle_R_x_pos'][i], df200Hz['ankle_R_y_pos'][i], df200Hz['ankle_R_z_pos'][i]]
]
legLeft[i,:,:] = [
[df200Hz['hip_x_pos'][i], df200Hz['hip_y_pos'][i], df200Hz['hip_z_pos'][i]],
[df200Hz['knee_x_pos'][i], df200Hz['knee_y_pos'][i], df200Hz['knee_z_pos'][i]],
[df200Hz['ankle_x_pos'][i], df200Hz['ankle_y_pos'][i], df200Hz['ankle_z_pos'][i]],
[df200Hz['heel_x_pos'][i], df200Hz['heel_y_pos'][i], df200Hz['heel_z_pos'][i]],
[df200Hz['foot_x_pos'][i], df200Hz['foot_y_pos'][i], df200Hz['foot_z_pos'][i]],
[df200Hz['ankle_x_pos'][i], df200Hz['ankle_y_pos'][i], df200Hz['ankle_z_pos'][i]]
]
### Estimate segment CoM coordinates
'''
The center of mass is an ideal point about which the torques due to body
segment weights is zero. Segment CoM coordinates will be equal to the
proximal coordinates plus the relative euclidean length to the CoM
towards the distal coordinates.
'''
segmentCoM = prox + L * (dist - prox)
### Estimate segment torque about origin
'''
Segment torque around the origin will be equal to the product of the CoM
coordinates and the relative mass of the segment.
'''
segmentTorque = segmentCoM * M
### Estimate whole body CoM coordinates
'''
Sum the torques about the origin
'''
wholeBodyCoM = np.sum(segmentTorque,axis=1)
### Estimate bottom bracket position using foot markers
'''
Get vector from foot to heel then scale it to get unit vector.
Use pedal spindle offset to create virtual marker at pedal spindle.
Fit circle to spindle data to calculate axis of rotation.
'''
foot = df200Hz[['foot_x_pos', 'foot_y_pos']].values
heel = df200Hz[['heel_x_pos', 'heel_y_pos']].values
vecFoot2heel = heel - foot
vecFoot2heelUnitLength = []
pedalSpindleCenterPos = []
for point in vecFoot2heel:
vecFoot2heelUnitLength.append(point / np.linalg.norm(point))
vecFoot2heelUnitLength = np.asanyarray(vecFoot2heelUnitLength)
for idx, point in enumerate(foot):
pedalSpindleCenterPos.append(point + vecFoot2heelUnitLength[idx]*dataPose['pso_x'] + perpendicular(vecFoot2heelUnitLength[idx])*dataPose['pso_y'])
pedalSpindleCenterPos = np.array(pedalSpindleCenterPos)
bbX, bbY, crankLength, _ = get_circle(pedalSpindleCenterPos[:,0], pedalSpindleCenterPos[:,1])
bottomBracketPos = np.array([bbX,bbY])
### Estimate Com relative to bottom bracket (anterior-posterior)
'''
Estimate the anterior-posterior position of the CoM relative
to the bottom bracket.
'''
com3d = np.squeeze(wholeBodyCoM)
com2bb = com3d[:,0] - bottomBracketPos[0]
com2bbMean = com2bb.mean()
### Plot initial rider and CoM position
'''
Coordinate system changes depending on which side the Vantage data is filmed from.
For example:
* Vantage system (Left) (X,Y,Z) = Vantage system (Right) (-X,Y,-Z)
* Vantage system (Left) (X,Y,Z) = MATLAB coordinate system (-Y,-Z,X)
* Vantage system (Right) (X,Y,Z) = MATLAB coordinate system (Y,-Z,-X)
'''
# Subplot 1: 3D Rider
color1 = "k"
color2 = "r"
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
p1 = ax.plot(head[0,:,2], -head[0,:,0], -head[0,:,1], "-o", c=color1, mfc=color1, ms=4)
p2 = ax.plot(armRight[0,:,2], -armRight[0,:,0], -armRight[0,:,1], "-o", c=color2, mfc=color2, ms=4)
p3 = ax.plot(armLeft[0,:,2], -armLeft[0,:,0], -armLeft[0,:,1], "-o", c=color1, mfc=color1, ms=4)
p4 = ax.plot(trunk[0,:,2], -trunk[0,:,0], -trunk[0,:,1], "-o", c=color1, mfc=color1, ms=4)
p5 = ax.plot(legRight[0,:,2], -legRight[0,:,0], -legRight[0,:,1], "-o", c=color2, mfc=color2, ms=4)
p6 = ax.plot(legLeft[0,:,2], -legLeft[0,:,0], -legLeft[0,:,1], "-o", c=color1, mfc=color1, ms=4)
p7 = ax.scatter(wholeBodyCoM[0,2], -wholeBodyCoM[0,0], -wholeBodyCoM[0,1], s=50, c="g")
p8 = ax.scatter(wholeBodyCoM[0,2], -bottomBracketPos[0], -bottomBracketPos[1], s=40, c=color1)
xLim = ax.get_xlim()
xRange = np.diff(xLim)
yLim = ax.get_ylim()
yRange = np.diff(yLim)
zLim = ax.get_zlim()
zRange = np.diff(zLim)
ax.set_box_aspect((xRange[0],yRange[0],zRange[0]), zoom=1)
ax.view_init(elev=0, azim=180)
return com3d, com2bb, com2bbMean, df200Hz, bottomBracketPos
# + [markdown] id="JgVz-XuduhBD"
# ## Mount drive (if needed)
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 27967, "status": "ok", "timestamp": 1633039436050, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "18321452801657203336"}, "user_tz": 360} id="G1TYT1j9ui-Y" outputId="680f1dc5-152a-4d1e-b835-14c96d84edb9"
# drive.mount('/content/drive')
# + [markdown] id="XC7f2FF0l0ax"
# ## Set directory paths
# + id="znvgEW9fl3th"
# expPath = '/content/drive/MyDrive/projects/vantage-com'
expPath = '/Users/rosswilkinson/My Drive/projects/vantage-com'
codPath = expPath + '/code'
datPath = expPath + '/data'
docPath = expPath + '/docs'
# + [markdown] id="4XIAFQV1l4Eo"
# ## Run vantage_com function
# + id="r987iGBUl7Th"
# %matplotlib qt
com3d, com2bb, com2bbMean, df200Hz, bottomBracketPos = vantage_com(
file = datPath + '/COM Pose files/Jason Pose files/hoods 2 (Right).pose')
print('Mean CoM to BB distance (X) = ', com2bbMean)
print('Bottom Bracket Position (XY) = ', bottomBracketPos)
print('Mean CoM Position (XYZ) = ', com3d.mean(0))
# -
# ## SCRATCH CELL
#
| code/vantage_com.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_tensorflow_p36
# language: python
# name: conda_tensorflow_p36
# ---
# +
# !pip uninstall -y s3fs
# !pip install s3fs==0.4.0
import s3fs
assert(s3fs.__version__ == "0.4.0")
bucket_name = "ucb-mids-wall-e-andy-test"
fs = s3fs.S3FileSystem(anon=False, key='', secret='')
print(fs.ls(bucket_name))
import sys
import urllib.request
# +
for fname in fs.ls(bucket_name + "/OpenStreetCam/openstreetcam_data_raw/"):
with fs.open(fname, 'rb') as f_in:
for raw_line in f_in:
line = raw_line.decode("utf-8")
split_line = line.split("\t")
track_id, image_id, url = split_line[0], split_line[1], split_line[-1]
target_name = bucket_name + "/OpenStreetCam/openstreetcam_data_images/{}_{}.jpg".format(track_id, image_id)
print(target_name)
if fs.exists(target_name):
print("skipped")
continue
f = urllib.request.urlopen(url, timeout=1800)
imbytes = f.read()
            if imbytes is not None:
fs.touch(target_name)
f_out = fs.open(target_name, "wb")
f_out.write(imbytes)
f_out.close()
print("downloaded")
# -
| DownloadOpenStreetCam.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# # CA Analysis $k=5$
# This is my second attempt to look at the CA with $k=5$. This time I've
# sampled a bit more evenly, so hopefully the results hold up.
# +
from statsmodels.sandbox.regression.predstd import wls_prediction_std
from scipy.stats import spearmanr
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import statsmodels.api as sm
rules = pd.read_csv('../data/k5/sampled_rules.csv', index_col=0)
ipm_full = pd.read_csv('../data/k5/stats/ipm_synergy_bias.csv', index_col=0)
#ipm = ipm_full.merge(rules, on='rule').dropna()
ipm = ipm_full.dropna()
# -
# ## Synergy Bias Distribution
# Last time our samples were highly skewed toward high synergy bias. Is this
# still true? Our sampling still isn't perfect.
print('# Samples with valid B_syn: ', ipm.shape[0])
sns.histplot(ipm['B_syn'])
plt.xlabel('Synergy Bias')
plt.savefig('../plots/k5/bsyn_hist.png')
plt.savefig('../plots/k5/bsyn_hist.pdf')
plt.show()
# Still skewed, but that's probably OK.
# ## Effective Connectivity
# What we really want to know is how effective connectivity compares
# to synergy bias. In the ECA we get a strong relationship, but it vanished for
# $k=5$ with the older sampling.
#
# For the sake of exploration we'll start with a distribution of effective
# connectivities. I expect this to have a peak somewhere in the upper
# half of the range.
cana = pd.read_csv('../data/k5/stats/k5_cana.csv', index_col=0)
ipm_cana = ipm.merge(cana, on='rule')
ipm_cana['ke*'] = 1 - ipm_cana['kr*']
sns.histplot(ipm_cana['ke*'], kde=True)
plt.savefig('../plots/k5/ke_hist.pdf')
plt.savefig('../plots/k5/ke_hist.png')
# ## $k_e^*$ and $B_{syn}$
# This comparison is really why we're here
# +
print(spearmanr(ipm_cana['ke*'], ipm_cana['B_syn']))
sns.scatterplot(x='B_syn', y='ke*', hue='mutual_info', data=ipm_cana)
plt.savefig('../plots/k5/ke_vs_bsyn.png')
plt.savefig('../plots/k5/ke_vs_bsyn.pdf')
plt.show()
# -
# It's a weird shape, and the mutual information doesn't really seem to show
# a pattern in terms of where in this space it deviates from 1. Let's take a
# look at the distribution before we move on, to get a better sense of what is
# going on with it.
# ## MI distribution
#
# I might need to know how mutual information is distributed
# so lets take a look.
sns.histplot(ipm['mutual_info'])
plt.xlabel(r'$I({l_2^{t-1}, l_1^{t-1}, c^{t-1}, r_1^{t-1}, r_2^{t-1}}:c^t)$')
plt.savefig('../plots/k5/mi_hist.pdf')
plt.savefig('../plots/k5/mi_hist.png')
plt.show()
#
# That's not super helpful, although it's pretty clear that 'deviates from 1' is
# the right way to think about it. I'm not sure whether that makes sense.
# I would think that we should either always have a full bit or rarely have
# a full bit, given that this is a deterministic but often chaotic system and I'm
# estimating probabilities for the joint states. Maybe that's just it: my
# estimates aren't that good and I should ignore MI.
#
# ## Regression
#
# OK, so the measures correlate (Spearman's r); can we do regression? It looks
# like OLS might just work.
# set up weighted least squares linear regression
# lets get the residuals
# plot the distribution of residuals and the residuals themselves
# residuals themselves on left
# distribution
# the fit itself
# the data
# the WLS fit
# labels
# save it
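# The stubbed-out steps above can be sketched as follows. This is a self-contained
# illustration with synthetic stand-ins for `B_syn` and `ke*` and unit weights
# (so the WLS reduces to OLS); the simulated relationship is made up:

```python
import numpy as np

# synthetic stand-ins for ipm_cana['B_syn'] and ipm_cana['ke*']
rng = np.random.default_rng(42)
b_syn = rng.uniform(0.0, 1.0, 200)
ke = 0.8 * b_syn + 0.1 * rng.normal(size=200)

# set up weighted least squares linear regression (unit weights here)
X = np.column_stack([np.ones_like(b_syn), b_syn])
w = np.ones_like(ke)
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * ke))

# lets get the residuals
resid = ke - X @ beta
```

# With the statsmodels imports at the top of this notebook, the equivalent fit is
# `sm.WLS(y, X, weights=w).fit()`, and `wls_prediction_std` gives prediction
# intervals for the fit plot.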
# # O-information
#
# Now let's take a look at O-information to see if it reports on effective
# connectivity. We will also take a look at how well it correlates with
# redundancy in the form of $1 - B_{syn}$
# +
o_info = pd.read_csv('../data/k5/stats/o_information_new.csv', index_col=0)
ipm = ipm_cana.merge(o_info, on='rule')
# drop non-significant values. This probably needs a multiple-testing correction;
# for Bonferroni, my p-values don't have enough resolution.
sig_o_info = ipm[(ipm['p'] > 0.95) | (ipm['p'] < 0.05)][['rule', 'B_syn', 'ke*', 'o-info', 'lambda']]
# make the plot for the comparison with synergy bias
sig_o_info['B_red'] = 1 - sig_o_info['B_syn']
fig, ax = plt.subplots()
sns.scatterplot(x=sig_o_info['B_red'], y=sig_o_info['o-info'], ax=ax)
ax.set_xlabel(r'$1 - B_{syn}$')
ax.set_ylabel('O-information')
plt.savefig('../plots/k5/bsyn_oinfo.pdf')
plt.savefig('../plots/k5/bsyn_oinfo.png')
plt.show()
# lets get a spearman correlation too
print(spearmanr(sig_o_info['B_red'], sig_o_info['o-info']))
# -
# Not the most impressive relationship.
#
# ## O-info and $k_e^*$
#
# This is the more important comparison anyway.
sns.scatterplot(x=sig_o_info['ke*'], y=sig_o_info['o-info'])
plt.savefig('../plots/k5/ke_oinfo.pdf')
plt.savefig('../plots/k5/ke_oinfo.png')
plt.show()
print(spearmanr(sig_o_info['ke*'], sig_o_info['o-info']))
# Uncorrelated! That's weird. It doesn't seem like O-information is
# as useful as we might like.
#
# # Directed Information Measures
#
# Transfer entropy and active information storage tell us about when
# the past of a variable is useful for the prediction of another variable. This
# really should not work for all variables in highly canalized functions with
# low effective connectivity.
# +
directed = pd.read_csv('../data/k5/stats/directed.csv', index_col=0)
ipm_dir = ipm_cana.merge(directed, on='rule').replace(-1, np.nan)
# let's get all of the like 'same input' transfer entropy vs. redundancy pairs
input_specific = ['rule', 'r(0)', 'r(1)', 'r(2)', 'r(3)', 'r(4)',
'0->', '1->', 'ais', '3->', '4->']
rename_cols = {'r(0)': 'cana_0',
'r(1)': 'cana_1',
'r(2)': 'cana_2',
'r(3)': 'cana_3',
'r(4)': 'cana_4',
'0->' : 'info_0',
'1->' : 'info_1',
'ais' : 'info_2',
'3->' : 'info_3',
'4->' : 'info_4',}
dir_info = ipm_dir[input_specific].rename(rename_cols, axis=1).dropna()
directed_long = pd.wide_to_long(dir_info, ['cana', 'info'], 'rule', 'input', sep='_')
# do the plot
plt.figure()
(sns.jointplot(x='info', y='cana', data=directed_long, kind='hist')
.set_axis_labels(r'$T_{i \rightarrow c}$ // $AIS_c$', r'$r^*(i)$'))
plt.savefig('../plots/k5/directed_cana.pdf')
plt.savefig('../plots/k5/directed_cana.png')
plt.show()
print(spearmanr(directed_long['info'], directed_long['cana']))
# -
#
# So that seems weird: it implies that there must be a bunch of redundant
# information complicating these relationships.
#
# ## Directed info and $B_{syn}$
#
# Can we see evidence for this influential redundant information as a negative
# correlation between $B_{syn}$ and a sum of these measures?
# +
ipm_dir['info_sum'] = (ipm_dir['0->'] + ipm_dir['1->']
+ ipm_dir['3->'] + ipm_dir['4->'])
ipm_dir = ipm_dir.dropna()
plt.figure()
plt.scatter(ipm_dir['B_syn'], ipm_dir['info_sum'])
plt.xlabel(r'$B_{syn}$')
plt.ylabel(r'$\sum T_{i \rightarrow c}$')
plt.savefig('../plots/k5/tesum_bsyn.pdf')
plt.savefig('../plots/k5/tesum_bsyn.png')
plt.show()
print(spearmanr(ipm_dir['info_sum'], ipm_dir['B_syn']))
# -
# Slight negative correlation. This makes sense. I think rather than rely on
# this relationship we are probably more interested in the TE conditioned on
# all other variables.
#
# # Lambda
#
# I sampled rule tables using Langton's lambda, which in a binary system is very
# similar to output entropy. Are any of the patterns simply products of lambda?
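# In a binary system the connection is direct: $\lambda$ is the fraction of 'on'
# outputs in the rule table, and the output entropy is just the binary entropy of
# that fraction. A minimal illustration (the rule table below is made up):

```python
import numpy as np

def langton_lambda(outputs):
    # fraction of lookup-table entries mapping to the "on" state
    return float(np.mean(outputs))

def output_entropy(outputs):
    # binary entropy (in bits) of the on/off output frequencies
    lam = langton_lambda(outputs)
    if lam in (0.0, 1.0):
        return 0.0
    return float(-lam * np.log2(lam) - (1.0 - lam) * np.log2(1.0 - lam))

table = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # a made-up k=3 rule table
```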
#
# ## Correlation as a function of lambda
#
# First we will look at the correlation between synergy bias and effective
# connectivity as a function of lambda.
# +
from scipy.stats import entropy
ls = []
corrs = []
ipm_cana['entropy'] = entropy([(ipm_cana['lambda'] + 2) / 32, 1 - (ipm_cana['lambda'] + 2) / 32])
for l in ipm_cana['lambda'].unique():
ls.append(l)
ldf = ipm_cana[ipm_cana['lambda'] == l]
rp = spearmanr(ldf['B_syn'], ldf['ke*'])
if rp.pvalue < 0.05:
corrs.append(rp.correlation)
else:
corrs.append(0)
plt.scatter(ls, corrs)
plt.xlabel(r'$\lambda$')
plt.ylabel(r'Spearman $\rho$')
plt.savefig('../plots/k5/lambda_corr.pdf')
plt.savefig('../plots/k5/lambda_corr.png')
plt.figure()
plt.scatter(ipm_cana['entropy'], ipm_cana['B_syn'])
plt.xlabel(r'$H_{out}$')
plt.ylabel(r'$B_{syn}$')
plt.savefig('../plots/k5/out_ent_bsyn.pdf')
plt.savefig('../plots/k5/out_ent_bsyn.png')
plt.show()
# -
#
# # Dynamics
#
# I might have to rerun the dynamics calculations for the rules I care about here;
# I think I will. Either way, what we care about is whether any of these measures
# tell us anything about the dynamics.
# +
raw_dyn = pd.read_csv('../data/k5/combined_dynamics.csv', index_col=0)
dyn_rows = []
for rule in raw_dyn['rule'].unique():
rule_rows = raw_dyn[raw_dyn['rule'] == rule]
new_row = {}
new_row['rule'] = int(rule)
new_row['mean_transient'] = np.mean(rule_rows['transient'])
new_row['se_transient'] = np.std(rule_rows['transient']) / np.sqrt(rule_rows.shape[0])
new_row['min_obs_attr'] = len(rule_rows['period'].unique())
new_row['mean_period'] = np.mean(rule_rows['period'])
new_row['se_period'] = np.std(rule_rows['period']) / np.sqrt(rule_rows.shape[0])
dyn_rows.append(new_row)
dyn = pd.DataFrame(dyn_rows)
ipm_dyn = ipm_cana.merge(dyn, on='rule')
# -
#
# ## Distribution of transients
sns.histplot(dyn['mean_transient'], log_scale=True, bins=20)
plt.savefig('../plots/k5/transient_hist.pdf')
plt.savefig('../plots/k5/transient_hist.png')
plt.show()
#
# ## Dynamics and $B_{syn}$
# +
fig, ax = plt.subplots(figsize=(4,4))
ax.scatter(ipm_dyn['B_syn'], ipm_dyn['mean_transient'],
facecolors='none', edgecolors='C0')
ax.set_yscale('log')
ax.set_xlabel(r'$B_{syn}$')
ax.set_ylabel(r'Transient')
plt.ylim((.1, 10**4))
plt.tight_layout()
plt.savefig('../plots/k5/bsyn_dyn.pdf')
plt.savefig('../plots/k5/bsyn_dyn.svg')
plt.savefig('../plots/k5/bsyn_dyn.png')
plt.show()
print(spearmanr(ipm_dyn['B_syn'], ipm_dyn['mean_transient']))
# -
#
# ## Dynamics and $k_e^*$
fig, ax = plt.subplots(figsize=(4,4))
ax.scatter(ipm_dyn['ke*'], ipm_dyn['mean_transient'],
facecolors='none', edgecolors='C0')
ax.set_yscale('log')
ax.set_xlabel(r'$k_e^*$')
ax.set_ylabel(r'Transient')
plt.ylim((.1, 10**5))
plt.tight_layout()
plt.savefig('../plots/k5/ke_dyn.pdf')
plt.savefig('../plots/k5/ke_dyn.svg')
plt.savefig('../plots/k5/ke_dyn.png')
plt.show()
print(spearmanr(ipm_dyn['ke*'], ipm_dyn['mean_transient']))
#
# ## Dynamics and output entropy
# +
# calculate the rule entropies with binary encoding
def rule_to_ent(rule: int) -> float:
    n_digits = 2**5
    digits = []
    # peel off the binary digits of the rule number
    while rule > 0:
        digits.append(rule % 2)
        rule //= 2
    ons = np.sum(digits) / n_digits
    return entropy([ons, 1 - ons])
dyn['entropy'] = dyn['rule'].apply(lambda x: rule_to_ent(x))
ent_vals = sorted(np.unique(dyn['entropy']))
se_periods = []
periods = []
se_transients = []
transients = []
for l in ent_vals:
    ld = dyn[dyn['entropy'] == l]
    period_vals = ld['mean_period'].dropna()
    transient_vals = ld['mean_transient'].dropna()
    periods.append(np.mean(period_vals))
    se_periods.append(np.std(period_vals) / np.sqrt(len(period_vals)))
    transients.append(np.mean(transient_vals))
    se_transients.append(np.std(transient_vals) / np.sqrt(len(transient_vals)))
# convert all to numpy arrays for easy math later
se_periods = np.array(se_periods)
periods = np.array(periods)
se_transients = np.array(se_transients)
transients = np.array(transients)
print(len(periods), len(ent_vals), len(se_periods))
plt.figure(figsize=(4,4))
plt.plot(ent_vals, periods, label='Period', marker='^', mfc='white', mec='C0')
plt.fill_between(ent_vals, periods - se_periods, periods + se_periods, color='C0', alpha = 0.4)
plt.plot(ent_vals, transients, label='Transient', marker='s', mfc='white', mec='C1')
plt.fill_between(ent_vals, transients - se_transients, transients + se_transients, color='C1', alpha = 0.4)
plt.xlabel(r'$H_{out}$')
plt.ylabel(r'Timesteps')
plt.ylim((1, 10**4))
plt.legend(loc='upper left')
plt.yscale('log')
plt.tight_layout()
plt.savefig('../plots/k5/entropy_dynamics.pdf')
plt.savefig('../plots/k5/entropy_dynamics.svg')
plt.savefig('../plots/k5/entropy_dynamics.png')
plt.show()
# -
# ## relationships between ke and b syn and system dynamics
# +
print(spearmanr(ipm_cana['ke*'], ipm_cana['B_syn']))
ipm_dyn['period_transient'] = ipm_dyn['mean_period'] + ipm_dyn['mean_transient']  # T + l, matching the legend
ipm_dyn['log_period_transient'] = np.log(ipm_dyn['period_transient'])
sns.scatterplot(x='B_syn', y='ke*', hue='log_period_transient',
data=ipm_dyn, palette='Blues', alpha=0.6)
plt.ylabel(r'$k_e^*$')
plt.xlabel(r'$B_{syn}$')
plt.legend(title=r'$ln(T+l)$')
plt.savefig('../plots/k5/ke_vs_bsyn_dyn.png')
plt.savefig('../plots/k5/ke_vs_bsyn_dyn.pdf')
plt.show()
# -
# # All dynamics in one plot
# Let's get the dynamics plots all in one place for $k=5$.
# +
fig = plt.figure(constrained_layout=True, figsize=(8, 6))
ax = fig.subplot_mosaic([['A', 'A'],
['B', 'C']])
ax['B'].scatter(ipm_dyn['B_syn'], ipm_dyn['mean_transient'],
facecolors='none', edgecolors='C0')
ax['B'].set_yscale('log')
ax['B'].set_xlabel(r'$B_{syn}$')
ax['B'].set_ylabel(r'Transient')
ax['C'].scatter(ipm_dyn['ke*'], ipm_dyn['mean_transient'],
facecolors='none', edgecolors='C0')
ax['C'].set_yscale('log')
ax['C'].set_xlabel(r'$k_e^*$')
ipm_dyn['entropy'] = ipm_dyn['rule'].apply(lambda x: rule_to_ent(x))
ent_vals = sorted(np.unique(ipm_dyn['entropy']))
se_periods = []
periods = []
se_transients = []
transients = []
for l in ent_vals:
    ld = ipm_dyn[ipm_dyn['entropy'] == l]
    period_vals = ld['mean_period'].dropna()
    transient_vals = ld['mean_transient'].dropna()
    periods.append(np.mean(period_vals))
    se_periods.append(np.std(period_vals) / np.sqrt(len(period_vals)))
    transients.append(np.mean(transient_vals))
    se_transients.append(np.std(transient_vals) / np.sqrt(len(transient_vals)))
print(len(ent_vals))
# convert all to numpy arrays for easy math later
se_periods = np.array(se_periods)
periods = np.array(periods)
se_transients = np.array(se_transients)
transients = np.array(transients)
ax['A'].plot(ent_vals, periods, label='Period', marker='^', mfc='white', mec='C0')
ax['A'].fill_between(ent_vals, periods - se_periods, periods + se_periods, color='C0', alpha = 0.4)
ax['A'].plot(ent_vals, transients, label='Transient',
marker='s', mfc='white', mec='C1')
ax['A'].fill_between(ent_vals, transients - se_transients, transients + se_transients, color='C1', alpha = 0.4)
ax['A'].set_xlabel(r'$H_{out}$')
ax['A'].set_ylabel(r'Timesteps')
ax['A'].set_yscale('log')
ax['A'].legend(loc='upper left')
# get things situated
plt.tight_layout()
plt.savefig('../plots/k5/all_dynamics.pdf')
plt.savefig('../plots/k5/all_dynamics.svg')
plt.savefig('../plots/k5/all_dynamics.png')
plt.show()
# -
| notebooks/k5_samples_plots.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Deep Learning
#
#
# ### Neural Networks
# A neural network is composed of layers, each containing neurons (nowadays usually called units); see the image below. The goal in deep learning is to create powerful models that learn features from the input data. A neural network is a function composition of layers.
#
#
# ### Improve training
# The training can be helped by using
# * Batch normalization
# * Regularization: L1, L2, Dropout (Dense, CNN layers)
# * Use of the right optimizer: RMSProp, Adam or Nadam
# * Right activation function: ReLU, ELU, SELU, tanh
# * Hyperparameter search: Random Search, Bayesian Optimization or Hyperband
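# The "function composition of layers" description above can be made concrete in a
# few lines of NumPy; the shapes and activations here are arbitrary illustrations:

```python
import numpy as np

def dense(W, b, activation):
    # one layer: x -> activation(W @ x + b)
    return lambda x: activation(W @ x + b)

relu = lambda z: np.maximum(z, 0.0)
identity = lambda z: z

rng = np.random.default_rng(0)
layer1 = dense(rng.normal(size=(4, 3)), np.zeros(4), relu)
layer2 = dense(rng.normal(size=(2, 4)), np.zeros(2), identity)

# the whole network is just the composition of its layers
network = lambda x: layer2(layer1(x))
out = network(np.ones(3))
```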
from IPython.display import Image
Image(filename="DeepLearn.png")
# # DNN on CIFAR10
# In this exercise you will build a DNN with dense layers. The task is to train a very deep DNN and make predictions on the CIFAR10 dataset, a well-known benchmark in computer vision.
#
# * Create a 20-layer NN with 100 neurons per layer
# * Use He initialization
# * Use the ELU activation function
# * Use Nadam
# * Use early stopping
import tensorflow as tf
from functools import partial
tf.random.set_seed(42)
# Download the CIFAR10 data via Keras.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train.shape, x_train[0, :, :].max(), x_train[0, :, :].min()
x_train, x_test = x_train/255.0, x_test/255.0
# # Create model
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1 ** (epoch/s)
return exponential_decay_fn
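# With `lr0=0.01` and `s=10` (the values used below), this schedule divides the
# learning rate by 10 every 10 epochs; a quick check (the function is restated so
# the snippet is self-contained):

```python
def exponential_decay(lr0, s):
    def exponential_decay_fn(epoch):
        return lr0 * 0.1 ** (epoch / s)
    return exponential_decay_fn

fn = exponential_decay(lr0=0.01, s=10)
lrs = [fn(e) for e in (0, 10, 20)]  # roughly 0.01, 0.001, 0.0001
```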
def create_dnn(
activation='elu',
kernel_initializer='he_normal',
units=100,
n_layers=20,
batch_normalization=False
):
partial_dnn = partial(tf.keras.layers.Dense,
activation=activation,
kernel_initializer=kernel_initializer)
model = tf.keras.Sequential()
model.add(tf.keras.layers.Flatten(input_shape=[32, 32, 3]))
for i in range(n_layers):
model.add(partial_dnn(units))
if batch_normalization:
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.Dense(10, activation='softmax'))
optimizer = tf.keras.optimizers.Nadam()
model.compile(optimizer=optimizer,
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
return model
model = create_dnn(batch_normalization=True)
model.summary()
# # Train Model
# +
exponential_decay_fn = exponential_decay(lr0=0.01, s=10)
exp_schedule = tf.keras.callbacks.LearningRateScheduler(exponential_decay_fn)
early_stopping = tf.keras.callbacks.EarlyStopping(patience=10)
callbacks = [early_stopping,
exp_schedule]
history = model.fit(x_train, y_train,
validation_data=(x_test, y_test),
epochs=100, batch_size=32, # 2 ** 5
callbacks=callbacks
)
# -
import pandas as pd
import matplotlib.pyplot as plt
pd.DataFrame(history.history).plot(figsize=(8, 5), grid=True)
plt.gca().set_ylim(0, 1)
plt.show();
# ### Exercise
# Replace batch-normalization with the SELU activation function. Is the performance better?
# * Normalize the input
# * Use LeCun normal initialization
# * Make sure that the DNN contains only a sequence of dense layers
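# For reference, SELU is a scaled ELU with fixed constants chosen so that
# activations self-normalize; a NumPy sketch (constants from Klambauer et al., 2017):

```python
import numpy as np

# SELU constants (Klambauer et al., 2017)
ALPHA = 1.6732632423543772
SCALE = 1.0507009873554805

def selu(x):
    x = np.asarray(x, dtype=float)
    return SCALE * np.where(x > 0, x, ALPHA * (np.exp(x) - 1.0))
```

# With `create_dnn` above, this corresponds to passing `activation='selu'`,
# `kernel_initializer='lecun_normal'`, and `batch_normalization=False`, plus
# normalizing the inputs.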
| notebooks/.ipynb_checkpoints/DNN on CIFAR10 tf.keras Batch Normalization Lecture-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="bEqdz1ZUMaj1"
# ## Loading of Miller ECoG data of motor imagery
#
# includes some visualizations
# + id="TLWjKq8bLDqm"
#@title Data retrieval
import os, requests
fname = 'motor_imagery.npz'
url = "https://osf.io/ksqv8/download"
if not os.path.isfile(fname):
try:
r = requests.get(url)
except requests.ConnectionError:
print("!!! Failed to download data !!!")
else:
if r.status_code != requests.codes.ok:
print("!!! Failed to download data !!!")
else:
with open(fname, "wb") as fid:
fid.write(r.content)
# + id="raBVOEWgUK_B" cellView="form"
#@title Import matplotlib and set defaults
from matplotlib import rcParams
from matplotlib import pyplot as plt
rcParams['figure.figsize'] = [20, 4]
rcParams['font.size'] =15
rcParams['axes.spines.top'] = False
rcParams['axes.spines.right'] = False
rcParams['figure.autolayout'] = True
# + id="sffzC_hyLgWZ" colab={"base_uri": "https://localhost:8080/"} outputId="bee233c7-afa9-497b-d17d-c0c4bddee4c9"
#@title Data loading
import numpy as np
alldat = np.load(fname, allow_pickle=True)['dat']
# select the first subject: [0][0] is the real-movement block, [0][1] is the imagery block
dat1 = alldat[0][0]
dat2 = alldat[0][1]
print(dat1.keys())
print(dat2.keys())
# + [markdown] id="5K7UT7dyj_6R"
# # Dataset info #
#
# This is one of multiple ECoG datasets from Miller 2019, recorded in a clinical setting with a variety of tasks. Raw data and the dataset paper are here:
#
# https://exhibits.stanford.edu/data/catalog/zk881ps0522
# https://www.nature.com/articles/s41562-019-0678-3
#
# This particular dataset was originally described in this paper:
#
# *Miller, <NAME>., <NAME>, <NAME>, <NAME>, <NAME>, and <NAME>. "Cortical activity during motor execution, motor imagery, and imagery-based online feedback." Proceedings of the National Academy of Sciences (2010): 200913697.*
#
# `dat1` and `dat2` are data from the two blocks performed in each subject. The first one was the actual movements, the second one was motor imagery. For the movement task, from the original dataset instructions:
#
# *Patients performed simple, repetitive, motor tasks of hand (synchronous flexion and extension of all fingers, i.e., clenching and releasing a fist at a self-paced rate of ~1-2 Hz) or tongue (opening of mouth with protrusion and retraction of the tongue, i.e., sticking the tongue in and out, also at ~1-2 Hz). These movements were performed in an interval-based manner, alternating between movement and rest, and the side of movement was always contralateral to the side of cortical grid placement.*
#
# For the imagery task, from the original dataset instructions:
#
# *Following the overt movement experiment, each subject performed an imagery task, imagining making identical movement rather than executing the movement. The imagery was kinesthetic rather than visual (“imagine yourself performing the actions like you just did”; i.e., “don’t imagine what it looked like, but imagine making the motions”).*
#
# Sample rate is always 1000Hz, and the ECoG data has been notch-filtered at 60, 120, 180, 240 and 250Hz, followed by z-scoring across time and conversion to float16 to minimize size. Please convert back to float32 after loading the data in the notebook, to avoid unexpected behavior.
#
# Both experiments:
# * `dat['V']`: continuous voltage data (time by channels)
# * `dat['srate']`: acquisition rate (1000 Hz). All stimulus times are in units of this.
# * `dat['t_on']`: time of stimulus onset in data samples
# * `dat['t_off']`: time of stimulus offset, always 400 samples after `t_on`
# * `dat['stim_id']`: identity of stimulus (11 = tongue, 12 = hand), real or imaginary stimulus
# * `dat['scale_uv']`: scale factor to multiply the data values to get to microvolts (uV).
# * `dat['locs']`: 3D electrode positions on the brain surface
#
#
# + id="TSf8XWng6RyX"
# quick way to get broadband power in time-varying windows
from scipy import signal
# pick subject 0 and experiment 0 (real movements)
dat1 = alldat[0][0]
# V is the voltage data
V = dat1['V'].astype('float32')
# high-pass filter above 50 Hz
b, a = signal.butter(3, [50], btype = 'high', fs=1000)
V = signal.filtfilt(b,a,V,0)
# compute smooth envelope of this signal = approx power
V = np.abs(V)**2
b, a = signal.butter(3, [10], btype = 'low', fs=1000)
V = signal.filtfilt(b,a,V,0)
# normalize each channel so its mean power is 1
V = V/V.mean(0)
# + id="_y72uLCt_KKG"
# average the broadband power across all tongue and hand trials
nt, nchan = V.shape
nstim = len(dat1['t_on'])
trange = np.arange(0, 2000)
ts = dat1['t_on'][:,np.newaxis] + trange
V_epochs = np.reshape(V[ts, :], (nstim, 2000, nchan))
V_tongue = (V_epochs[dat1['stim_id']==11]).mean(0)
V_hand = (V_epochs[dat1['stim_id']==12]).mean(0)
# + id="mmOarX5w16CR"
# let's find the electrodes that distinguish tongue from hand movements
# note the behaviors happen some time after the visual cue
from matplotlib import pyplot as plt
plt.figure(figsize=(20,10))
for j in range(46):
ax = plt.subplot(5,10,j+1)
plt.plot(trange, V_tongue[:,j])
plt.plot(trange, V_hand[:,j])
plt.title('ch%d'%j)
plt.xticks([0, 1000, 2000])
plt.ylim([0, 4])
# + id="eGSL0nujEJEt"
# let's look at all the trials for electrode 20 that has a good response to hand movements
# we will sort trials by stimulus id
plt.subplot(1,3,1)
isort = np.argsort(dat1['stim_id'])
plt.imshow(V_epochs[isort,:,20].astype('float32'), aspect='auto', vmax=7, vmin = 0, cmap = 'magma')
plt.colorbar()
# + id="h9Ck9YmcEiNG"
# Electrode 42 seems to respond to tongue movements
isort = np.argsort(dat1['stim_id'])
plt.subplot(1,3,1)
plt.imshow(V_epochs[isort,:,42].astype('float32'), aspect='auto', vmax=7, vmin = 0, cmap = 'magma')
plt.colorbar()
# + id="Gda8DfWlCilR"
| projects/ECoG/load_ECoG_motor_imagery.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
import utils
# # Data Loading
data = np.load('data.npz')
x, y = data['x'], data['y']
x = np.concatenate([np.ones((x.shape[0], 1)), x], axis=1)
t = np.arange(-1, 1, 0.01).reshape((-1, 1))
t = np.concatenate([np.ones((t.shape[0], 1)), t], axis=1)
x_small, y_small = x[:15], y[:15]
# Ex1 Build a scatter plot for x_small, y_small. You may want to look at plt.scatter
plt.scatter(x_small[:, 1], y_small)
# ## Simple Linear Regression
# Ex2 Fit a simple linear regression with lr=0.05 and plot the evolution of losses. You may want to look at utils file and at plt.plot
opt = utils.GD(0.05)
lr = utils.LR(num_features=2, optimizer=opt)
losses = lr.fit(x_small, y_small)
plt.plot(losses)
# Ex3 Calculate model predictions over the values of t and plot them together with the input data
y_pred = lr.predict(t)
plt.scatter(x_small[:, 1], y_small, color='blue')
plt.plot(t[:, 1], y_pred, color='red')
# ## Polynomial Regression
# Ex4 Define a function which takes a matrix x (whose first column is the constant 1)
# and an int deg, and returns a matrix x_poly whose first column is the constant 1 and
# whose remaining columns are the original columns raised to the powers k, k=1..deg.
def make_poly(x, deg):
x_poly = x[:, 1:]
#polys = []
for k in range(2, deg+1):
x_poly = np.concatenate([x_poly, x[:, 1:] ** k], axis=1)
x_poly = np.concatenate([np.ones((x.shape[0], 1)), x_poly], axis=1)
return x_poly
def make_funcs(x, funcs):
x_poly = x[:, 1:]
#polys = []
for f in funcs:
x_poly = np.concatenate([x_poly, f(x[:, 1:])], axis=1)
x_poly = np.concatenate([np.ones((x.shape[0], 1)), x_poly], axis=1)
return x_poly
x_test = np.array([
[1, 2, 3],
[1, 4, 5]])
y_res = np.array([[ 1., 2., 3., 4., 9., 8., 27.],
[ 1., 4., 5., 16., 25., 64., 125.]])
make_poly(x_test, 4)
assert np.allclose(make_poly(x_test, 3), y_res), 'Something is wrong'
# Ex5 Build polynomial regressions for all degrees from 1 to 25 and store their losses. For this exercise use fit_closed_form method instead of GD
lrs = {'models': [], 'losses': []}
for k in range(1, 26):
x_poly_k = make_poly(x_small, k)
lrs['models'].append(utils.LR(num_features=x_poly_k.shape[1]))
loss = lrs['models'][-1].fit_closed_form(x_poly_k, y_small)
lrs['losses'].append(loss)
plt.scatter(list(range(1, 26)), lrs['losses'])
# Ex6 plot the predicted values over t and scatter of true points for some models
plt.figure(figsize=(20, 10))
for k in range(1, 26):
t_poly = make_poly(t, k)
lr = lrs['models'][k-1]
y_pred = lr.predict(t_poly)
plt.subplot(5, 5, k)
plt.scatter(x_small[:, 1], y_small, color='blue')
plt.plot(t[:, 1], y_pred, color='red')
plt.ylim((-0.15, 0.15))
plt.title(f'deg={k}')
plt.show()
# ## Overfit/Underfit
# Ex7 Modify the regression's fit method to also get some validation data and output losses over validation data
class LR_valid:
def __init__(self, num_features=1, optimizer=utils.GD(0.1)):
self.W = np.zeros((num_features, 1))
self.optimizer = optimizer
def predict(self, X):
y = X @ self.W
return y
def one_step_opt(self, X, y_true):
grads = - X.T @ (y_true - X @ self.W) / X.shape[0]
self.W = self.optimizer.apply_grads(self.W, grads)
loss = (y_true - self.predict(X)).T @ (y_true - self.predict(X)) / X.shape[0]
return loss, grads
def fit(self, X, y_true, X_valid=None, y_valid=None, grad_tol=0.0001, n_iters=1000):
grad_norm = np.inf
n_iter = 0
losses = []
valid_losses = []
while (grad_norm > grad_tol) and (n_iter < n_iters):
loss, grads = self.one_step_opt(X, y_true)
grad_norm = np.linalg.norm(grads)
n_iter += 1
losses.append(loss[0][0])
            if X_valid is not None:
                valid_loss = (y_valid - self.predict(X_valid)).T @ (y_valid - self.predict(X_valid)) / X_valid.shape[0]
                valid_losses.append(valid_loss[0][0])
return losses, valid_losses
def fit_closed_form(self, X, y_true, X_valid=None, y_valid=None):
self.W = np.linalg.inv(X.T @ X) @ X.T @ y_true
loss = (y_true - self.predict(X)).T @ (y_true - self.predict(X)) / X.shape[0]
return loss
k = 3
x_poly = make_poly(x_small, k)
x_valid = make_poly(x[100:1100], k)
y_valid = y[100:1100]
lr = LR_valid(num_features=x_poly.shape[1])
losses, valid_losses = lr.fit(x_poly, y_small, X_valid=x_valid, y_valid=y_valid)
plt.plot(losses, color='blue')
plt.plot(valid_losses, color='orange')
plt.figure(figsize=(20, 10))
for k in range(1, 26):
x_poly = make_poly(x_small, k)
x_valid = make_poly(x[100:1100], k)
y_valid = y[100:1100]
lr = LR_valid(num_features=x_poly.shape[1])
losses, valid_losses = lr.fit(x_poly, y_small, X_valid=x_valid, y_valid=y_valid)
plt.subplot(5, 5, k)
plt.plot(losses, color='blue')
plt.plot(valid_losses, color='orange')
plt.ylim((0, 0.005))
plt.title(f'deg={k}')
plt.show()
plt.plot(losses, color='blue')
plt.plot(valid_losses, color='orange')
k = 12
x_poly = make_poly(x[:8000], k)
x_valid = make_poly(x[8000:], k)
y_valid = y[8000:]
lr = LR_valid(num_features=x_poly.shape[1])
losses, valid_losses = lr.fit(x_poly, y[:8000], X_valid=x_valid, y_valid=y_valid)
plt.plot(losses, color='blue')
plt.plot(valid_losses, color='orange')
t_poly = make_poly(t, k)
y_pred = lr.predict(t_poly)
plt.plot(t[:, 1], y_pred, color='red')
plt.scatter(x[8000:, 1], y[8000:], color='blue')
# Ex8 Find train and valid losses for all polynomial models
lrs = {'models': [], 'losses': [], 'train_loss_history': [], 'valid_loss_history': []}
for k in range(1, 26):
x_poly_k = make_poly(x_small, k)
pass
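# One self-contained way to sketch Ex8, with synthetic data standing in for
# x_small/y_small (the data-generating curve and the degree range are illustrative;
# the closed-form fit mirrors fit_closed_form above):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(60, 1))
y = 0.3 * x**3 - 0.2 * x + 0.01 * rng.normal(size=(60, 1))

def make_poly_cols(x, deg):
    # columns: 1, x, x**2, ..., x**deg
    return np.concatenate([x**k for k in range(deg + 1)], axis=1)

x_tr, y_tr = x[:15], y[:15]   # small training set, as in the notebook
x_va, y_va = x[15:], y[15:]   # held-out validation set
train_losses, valid_losses = [], []
for k in range(1, 11):
    Xtr, Xva = make_poly_cols(x_tr, k), make_poly_cols(x_va, k)
    W = np.linalg.pinv(Xtr) @ y_tr          # closed-form least squares
    mse = lambda X, t: float(((t - X @ W) ** 2).mean())
    train_losses.append(mse(Xtr, y_tr))
    valid_losses.append(mse(Xva, y_va))
```

# Training loss can only go down as the degree grows (nested models), while the
# validation loss eventually turns up: the overfitting picture from Ex6.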
# Ex9 Do the same thing as Ex8, but instead of using 15 samples use 5000
pass
# ### Regularization
# Ex10 Implement L2 and L1 regularization
# $$J(W) = J_{\mathrm{old}}(W) + \alpha \left(w_1^2 + \dots + w_p^2\right)$$
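# Differentiating the L2 cost gives the update used in `one_step_opt` below, with
# the bias term $w_0$ left unregularized (hence `grad_reg[0, 0] = 0`):
#
# $$\frac{\partial J}{\partial W} = -\frac{1}{n} X^\top (y - XW) + 2\alpha W$$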
class LR_valid_L2:
def __init__(self, num_features=1, optimizer=utils.GD(0.05), alpha=0):
self.W = np.zeros((num_features, 1)) + 5
self.optimizer = optimizer
self.alpha = alpha
def predict(self, X):
y = X @ self.W
return y
def one_step_opt(self, X, y_true):
grad_reg = 2 * self.alpha * self.W
grad_reg[0, 0] = 0
reg_loss = np.sum(self.alpha * (self.W) ** 2)
grads = - X.T @ (y_true - X @ self.W) / X.shape[0] + grad_reg
self.W = self.optimizer.apply_grads(self.W, grads)
loss = (y_true - self.predict(X)).T @ (y_true - self.predict(X)) / X.shape[0]
return loss, grads, reg_loss
def fit(self, X, y_true, X_valid=None, y_valid=None, grad_tol=0.0001, n_iters=10000):
grad_norm = np.inf
n_iter = 0
losses = []
valid_losses = []
reg_losses = []
while (grad_norm > grad_tol) and (n_iter < n_iters):
loss, grads, reg_loss = self.one_step_opt(X, y_true)
grad_norm = np.linalg.norm(grads)
n_iter += 1
losses.append(loss[0][0])
valid_loss = (y_valid - self.predict(X_valid)).T @ (y_valid - self.predict(X_valid)) / X_valid.shape[0]
valid_losses.append(valid_loss[0][0])
reg_losses.append(reg_loss)
return np.array(losses), np.array(valid_losses), np.array(reg_losses)
def fit_closed_form(self, X, y_true, X_valid=None, y_valid=None):
self.W = np.linalg.inv(X.T @ X) @ X.T @ y_true
loss = (y_true - self.predict(X)).T @ (y_true - self.predict(X)) / X.shape[0]
return loss
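A quick way to sanity-check a hand-derived gradient like the one in `one_step_opt` is a finite-difference comparison. The sketch below uses the full MSE-plus-L2 objective (note the class above drops the factor of 2 on the data term, effectively folding it into the learning rate) and, like the class, excludes the bias weight `w_0` from the penalty:

```python
import numpy as np

def l2_loss(W, X, y, alpha):
    resid = y - X @ W
    # mean squared error plus an L2 penalty on all weights except the bias w_0
    return float(resid.T @ resid) / X.shape[0] + alpha * float(np.sum(W[1:] ** 2))

def l2_grad(W, X, y, alpha):
    g = -2 * X.T @ (y - X @ W) / X.shape[0] + 2 * alpha * W
    g[0, 0] -= 2 * alpha * W[0, 0]  # the bias is not penalized
    return g

rng = np.random.default_rng(0)
X = np.hstack([np.ones((20, 1)), rng.normal(size=(20, 3))])
y = rng.normal(size=(20, 1))
W = rng.normal(size=(4, 1))
alpha, eps = 0.1, 1e-6

# central finite differences, one coordinate at a time
num = np.zeros_like(W)
for i in range(W.shape[0]):
    Wp, Wm = W.copy(), W.copy()
    Wp[i, 0] += eps
    Wm[i, 0] -= eps
    num[i, 0] = (l2_loss(Wp, X, y, alpha) - l2_loss(Wm, X, y, alpha)) / (2 * eps)

assert np.allclose(num, l2_grad(W, X, y, alpha), atol=1e-5)
```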
#plt.figure(figsize=(10, 10))
terminal_losses = []
alphas = np.arange(0, 0.01, 0.001)
Ws = []
for alpha in alphas:
k = 17
x_poly = make_poly(x_small, k)
x_valid = make_poly(x[100:1100], k)
y_valid = y[100:1100]
lr = LR_valid_L2(num_features=x_poly.shape[1], alpha=alpha)
losses, valid_losses, reg_losses = lr.fit(x_poly, y_small, X_valid=x_valid, y_valid=y_valid)
terminal_losses.append(valid_losses[-1])
Ws.append(lr.W[2])
total_losses = losses + reg_losses
'''
plt.subplot(2, 1, 1)
plt.plot(losses, color='blue')
plt.plot(valid_losses, color='orange')
plt.plot(reg_losses, color='green')
plt.plot(total_losses, color='black')
plt.ylim((0, 0.005))
plt.title(f'deg={k}')
t_poly = make_poly(t, k)
y_pred = lr.predict(t_poly)
plt.subplot(2, 1, 2)
plt.plot(t[:, 1], y_pred, color='red')
plt.plot(t[:, 1], y_pred, color='red')
plt.scatter(x[8000:, 1], y[8000:], color='blue')
plt.scatter(x_small[:, 1], y_small, color='pink', s=50)
plt.ylim((-0.15, 0.15))
plt.title(f'deg={k}')
plt.show()
'''
plt.plot(alphas, Ws)
#plt.ylim((0, 0.01))
# $$J(W) = J_{old}(W) + \alpha \, (|w_1| + \dots + |w_p|)$$
class LR_valid_L1:
def __init__(self, num_features=1, optimizer=utils.GD(0.05), alpha=0):
self.W = np.zeros((num_features, 1)) + 5
self.optimizer = optimizer
self.alpha = alpha
def predict(self, X):
y = X @ self.W
return y
def one_step_opt(self, X, y_true):
grad_reg = self.alpha * np.sign(self.W)
grad_reg[0, 0] = 0
reg_loss = np.sum(self.alpha * np.abs(self.W))
grads = - X.T @ (y_true - X @ self.W) / X.shape[0] + grad_reg
self.W = self.optimizer.apply_grads(self.W, grads)
loss = (y_true - self.predict(X)).T @ (y_true - self.predict(X)) / X.shape[0]
return loss, grads, reg_loss
def fit(self, X, y_true, X_valid=None, y_valid=None, grad_tol=0.0001, n_iters=10000):
grad_norm = np.inf
n_iter = 0
losses = []
valid_losses = []
reg_losses = []
while (grad_norm > grad_tol) and (n_iter < n_iters):
loss, grads, reg_loss = self.one_step_opt(X, y_true)
grad_norm = np.linalg.norm(grads)
n_iter += 1
losses.append(loss[0][0])
valid_loss = (y_valid - self.predict(X_valid)).T @ (y_valid - self.predict(X_valid)) / X_valid.shape[0]
valid_losses.append(valid_loss[0][0])
reg_losses.append(reg_loss)
return np.array(losses), np.array(valid_losses), np.array(reg_losses)
def fit_closed_form(self, X, y_true, X_valid=None, y_valid=None):
self.W = np.linalg.inv(X.T @ X) @ X.T @ y_true
loss = (y_true - self.predict(X)).T @ (y_true - self.predict(X)) / X.shape[0]
return loss
# +
plt.figure(figsize=(10, 10))
terminal_losses = []
alphas = np.arange(0.005, 0.01, 0.001)  # start must be below stop, otherwise np.arange returns an empty array
Ws = []
alpha = 0.003
#for alpha in alphas:
k = 17
x_poly = make_poly(x_small, k)
mu = np.mean(x_poly, axis=0)
std = np.std(x_poly, axis=0)
x_poly = (x_poly - mu) / std
x_poly[:, 0] = 1
x_valid = make_poly(x[100:1100], k)
x_valid = (x_valid - mu) / std
x_valid[:, 0] = 1
y_valid = y[100:1100]
lr = LR_valid_L1(num_features=x_poly.shape[1], alpha=alpha)
losses, valid_losses, reg_losses = lr.fit(x_poly, y_small, X_valid=x_valid, y_valid=y_valid)
terminal_losses.append(valid_losses[-1])
Ws.append(lr.W[2])
total_losses = losses + reg_losses
plt.subplot(2, 1, 1)
plt.plot(losses, color='blue')
plt.plot(valid_losses, color='orange')
plt.plot(reg_losses, color='green')
plt.plot(total_losses, color='black')
#plt.ylim((0, 0.005))
plt.title(f'deg={k}')
t_poly = make_poly(t, k)
y_pred = lr.predict(t_poly)
plt.subplot(2, 1, 2)
plt.plot(t[:, 1], y_pred, color='red')
plt.plot(t[:, 1], y_pred, color='red')
plt.scatter(x[8000:, 1], y[8000:], color='blue')
plt.scatter(x_small[:, 1], y_small, color='pink', s=50)
plt.ylim((-0.15, 0.15))
plt.title(f'deg={k}')
plt.show()
# +
terminal_losses = []
alphas = np.arange(0.009, 0.01, 0.0001)
Ws = []
k = 17
x_poly = make_poly(x_small, k)
x_valid = make_poly(x[100:1100], k)
y_valid = y[100:1100]
for alpha in alphas:
lr = LR_valid_L1(num_features=x_poly.shape[1], alpha=alpha)
losses, valid_losses, reg_losses = lr.fit(x_poly, y_small, X_valid=x_valid, y_valid=y_valid)
terminal_losses.append(valid_losses[-1])
Ws.append(lr.W[2])
total_losses = losses + reg_losses
# -
| atilla_gosha/Regularization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.metrics import mean_absolute_error
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import KFold  # sklearn.cross_validation was removed in scikit-learn 0.20
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, BatchNormalization, PReLU
from keras.callbacks import EarlyStopping
# -
np.random.seed(2017)
# +
train = pd.read_csv('train.csv')
train.head()
# +
df_train = train.sample(n=100000)
y = np.log( df_train['loss'].values )
sparse_data = []
# -
# ### Categorical Variables
feat_cats = [f for f in df_train.columns if 'cat' in f]
for feat in feat_cats:
dummy = pd.get_dummies(df_train[feat].astype('category'))
tmp = csr_matrix(dummy)
sparse_data.append(tmp)
# ### Continuous Variables
f_num = [f for f in df_train.columns if 'cont' in f]
scaler = StandardScaler()
tmp = csr_matrix(scaler.fit_transform(df_train[f_num]))
sparse_data.append(tmp)
X = hstack(sparse_data, format = 'csr')
X
def nn_model(input_dim):
model = Sequential()
model.add(Dense(400, input_dim = input_dim, kernel_initializer = 'he_normal'))
model.add(PReLU())
model.add(BatchNormalization())
model.add(Dropout(0.45))
model.add(Dense(200, kernel_initializer = 'he_normal'))
model.add(PReLU())
model.add(BatchNormalization())
model.add(Dropout(0.2))
model.add(Dense(50, kernel_initializer = 'he_normal'))
model.add(PReLU())
model.add(BatchNormalization())
model.add(Dropout(0.2))
model.add(Dense(1, kernel_initializer = 'he_normal'))
model.compile(loss = 'mae', optimizer = 'adadelta')
return(model)
# +
def batch_generator(X, y, batch_size, shuffle):
#chenglong code for fiting from generator (https://www.kaggle.com/c/talkingdata-mobile-user-demographics/forums/t/22567/neural-network-for-sparse-matrices)
number_of_batches = np.ceil(X.shape[0]/batch_size)
counter = 0
sample_index = np.arange(X.shape[0])
if shuffle:
np.random.shuffle(sample_index)
while True:
batch_index = sample_index[batch_size*counter:batch_size*(counter+1)]
X_batch = X[batch_index,:].toarray()
y_batch = y[batch_index]
counter += 1
yield X_batch, y_batch
if (counter == number_of_batches):
if shuffle:
np.random.shuffle(sample_index)
counter = 0
def batch_generatorp(X, batch_size, shuffle):
    number_of_batches = np.ceil(X.shape[0] / batch_size)  # total number of batches, matching batch_generator above
counter = 0
sample_index = np.arange(X.shape[0])
while True:
batch_index = sample_index[batch_size * counter:batch_size * (counter + 1)]
X_batch = X[batch_index, :].toarray()
counter += 1
yield X_batch
if (counter == number_of_batches):
counter = 0
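A quick check of the generator contract described above (a sketch assuming a scipy sparse matrix): when `batch_size` does not divide the number of rows, the final batch is simply smaller, and the generator then wraps around to the start.

```python
import numpy as np
from scipy.sparse import csr_matrix

def batch_generator(X, y, batch_size, shuffle):
    # same contract as above: yields dense (X_batch, y_batch) pairs forever
    number_of_batches = int(np.ceil(X.shape[0] / batch_size))
    counter = 0
    sample_index = np.arange(X.shape[0])
    if shuffle:
        np.random.shuffle(sample_index)
    while True:
        batch_index = sample_index[batch_size * counter:batch_size * (counter + 1)]
        yield X[batch_index, :].toarray(), y[batch_index]
        counter += 1
        if counter == number_of_batches:
            if shuffle:
                np.random.shuffle(sample_index)
            counter = 0

X = csr_matrix(np.arange(20.0).reshape(10, 2))
y = np.arange(10)
gen = batch_generator(X, y, batch_size=4, shuffle=False)
sizes = [next(gen)[0].shape[0] for _ in range(4)]
# batches of 4, 4, then the 2 leftover rows, then wrap around
assert sizes == [4, 4, 2, 4]
```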
# +
nepochs = 2
nfolds = 3
folds = KFold(n_splits=nfolds, shuffle=True, random_state=2017)
for num_iter, (train_index, test_index) in enumerate(folds.split(X)):
X_train, y_train = X[train_index], y[train_index]
X_test, y_test = X[test_index], y[test_index]
model = nn_model(X_train.shape[1])
callbacks=[EarlyStopping(patience=8)]
    model.fit(batch_generator(X_train, y_train, 128, True),
              epochs=nepochs,
              steps_per_epoch=int(np.ceil(y_train.shape[0] / 128)),
              validation_data=(X_test.todense(), y_test),
              verbose=2, callbacks=callbacks)
    y_pred = np.exp(model.predict(batch_generatorp(X_test, 128, False),
                                  steps=int(np.ceil(X_test.shape[0] / 128)))[:, 0])
score = mean_absolute_error(np.exp(y_test), y_pred)
print("Fold{0}, score={1}".format(num_iter+1, score))
# -
# ## Task
#
# Play around with the NN architecture. The first version is:
#
# - input
# - hidden1: 400
# - dropout + BN
# - hidden2: 200
# - dropout + BN
# - hidden3: 50
# - output
#
#
# Try changing something (remove a layer, add a new one, change the dropout rate, and so on).
#
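Changing layer widths changes the parameter count, which is worth tracking when experimenting. A quick back-of-the-envelope check for the dense stack above (each `Dense` contributes `in_dim * units + units` weights; each BatchNormalization adds `4 * units` parameters; the input width here is a hypothetical stand-in for the real `X.shape[1]` after one-hot encoding):

```python
# Hypothetical input width; the real value is X.shape[1] after encoding.
input_dim = 1000
layers = [400, 200, 50, 1]

dense_params = 0
prev = input_dim
for units in layers:
    dense_params += prev * units + units  # weights + biases
    prev = units

# BN on the three hidden layers: gamma, beta, moving mean, moving variance
bn_params = sum(4 * units for units in layers[:-1])

print(dense_params, bn_params)
```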
| step3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Setup a classification experiment
# +
import pandas as pd
from sklearn.model_selection import train_test_split
df = pd.read_csv(
"https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data",
header=None)
df.columns = [
"Age", "WorkClass", "fnlwgt", "Education", "EducationNum",
"MaritalStatus", "Occupation", "Relationship", "Race", "Gender",
"CapitalGain", "CapitalLoss", "HoursPerWeek", "NativeCountry", "Income"
]
train_cols = df.columns[0:-1]
label = df.columns[-1]
X = df[train_cols]
y = df[label].apply(lambda x: 0 if x == " <=50K" else 1) #Turning response into 0 and 1
seed = 1
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=seed)
# -
# ## Explore the dataset
# +
from interpret import show
from interpret.data import ClassHistogram
hist = ClassHistogram().explain_data(X_train, y_train, name = 'Train Data')
show(hist)
# -
# ## Train the Explainable Boosting Machine (EBM)
# +
from interpret.glassbox import ExplainableBoostingClassifier, LogisticRegression, ClassificationTree, DecisionListClassifier
ebm = ExplainableBoostingClassifier(random_state=seed)
ebm.fit(X_train, y_train) #Works on dataframes and numpy arrays
# -
# ## Global Explanations: What the model learned overall
ebm_global = ebm.explain_global(name='EBM')
show(ebm_global)
# ## Local Explanations: How an individual prediction was made
ebm_local = ebm.explain_local(X_test[:5], y_test[:5], name='EBM')
show(ebm_local)
# ## Evaluate EBM performance
# +
from interpret.perf import ROC
ebm_perf = ROC(ebm.predict_proba).explain_perf(X_test, y_test, name='EBM')
show(ebm_perf)
# -
# ## Let's test out a few other Explainable Models
# +
from interpret.glassbox import LogisticRegression, ClassificationTree
# We have to transform categorical variables to use Logistic Regression and Decision Tree
X_enc = pd.get_dummies(X, prefix_sep='.')
feature_names = list(X_enc.columns)
X_train_enc, X_test_enc, y_train, y_test = train_test_split(X_enc, y, test_size=0.20, random_state=seed)
lr = LogisticRegression(random_state=seed, feature_names=feature_names, penalty='l1')
lr.fit(X_train_enc, y_train)
tree = ClassificationTree()
tree.fit(X_train_enc, y_train)
# -
# ## Compare performance using the Dashboard
# +
lr_perf = ROC(lr.predict_proba).explain_perf(X_test_enc, y_test, name='Logistic Regression')
tree_perf = ROC(tree.predict_proba).explain_perf(X_test_enc, y_test, name='Classification Tree')
show(lr_perf)
show(tree_perf)
show(ebm_perf)
# -
# ## Glassbox: All of our models have global and local explanations
# +
lr_global = lr.explain_global(name='LR')
tree_global = tree.explain_global(name='Tree')
show(lr_global)
show(tree_global)
show(ebm_global)
# -
# ## Dashboard: look at everything at once
# +
# Do everything in one shot with the InterpretML Dashboard by passing a list into show
show([hist, lr_global, lr_perf, tree_global, tree_perf, ebm_global, ebm_perf], share_tables=True)
# -
| examples/notebooks/Interpretable Classification Methods.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Racket
# language: racket
# name: racket
# ---
# #Linear regression
#
# Here we
# * make up a regression model,
# * sample some points from it, and
# * see how well we can recover the true parameters.
#
# Though it's very simple, this example illustrates basic Gamble usage, including observing functions of random variables, and some Gamble visualization tools.
(require gamble
racket/class
racket/list
racket/vector
racket/port
"lr_helpers.rkt"
"c3_helpers.rkt")
# ##Make up a true regression model
#
# We pick a slope, a y-intercept, and an error sigma, which we will use to generate some data.
#
# $$y_i \sim N(mx_i + b,\sigma)$$
#
# We want to recover the true parameters from the data.
(define true-slope 3)
(define true-intercept 12)
(define true-noise-sigma 15)
# ##Make some data
#
# We will collect random $y$ values for 30 evenly-spaced $x$.
(define n-points 30)
(define xs (build-vector n-points (lambda (x) x)))
(define ys (build-vector n-points
(lambda (x) (+ (* x true-slope)
true-intercept
(normal 0 true-noise-sigma)))))
(scatter-c3 (vector->list xs) (vector->list ys) #:legend "sample points")
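For comparison, the same generative model can be sketched in Python/NumPy (an illustrative translation, not part of the Gamble workflow): sample $y_i \sim N(mx_i + b, \sigma)$ and recover point estimates of slope and intercept with an ordinary least-squares fit, where the Gamble model below infers full posteriors instead.

```python
import numpy as np

true_slope, true_intercept, true_noise_sigma = 3, 12, 15
n_points = 30

rng = np.random.default_rng(0)
xs = np.arange(n_points)
ys = true_slope * xs + true_intercept + rng.normal(0, true_noise_sigma, n_points)

# least-squares point estimates of (slope, intercept)
slope, intercept = np.polyfit(xs, ys, 1)
print(slope, intercept)
```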
# ##Make a generative model inside a sampler
#
# This is the core probabilistic program, here. Not much to it.
(define lr
(mh-sampler
(define slope (normal 0 10))
(define intercept (normal 0 10))
(define noise-sigma (gamma 1 15))
(define (f x)
(+ (* slope x) intercept (normal 0 noise-sigma)))
(for ([x xs]
[y ys])
(observe (f x) y))
(vector slope intercept noise-sigma)))
(define results (sample-and-monitor lr 50 #:burn 500 #:thin 50))
(show-acc-rates results)
(show-scale-changes results)
(show-query-means results 0 #:ylabel "slope")
(show-query-means results 1 #:ylabel "intercept")
(show-query-means results 2 #:ylabel "noise-sigma")
# ##Visualizing posterior samples
#
# With a little bit of racket hacking, we can superimpose regression lines defined by our sampled slope and intercept values on the observed data points.
(let ([x-min 0]
[x-max 29]
[s (just-the-samples results)])
(let ([rxs (for/list ([i 50])
(list x-min x-max))]
[rys (map
(lambda (smp) (list
(+ (vector-ref smp 1) (* x-min (vector-ref smp 0)))
(+ (vector-ref smp 1) (* x-max (vector-ref smp 0)))))
(just-the-samples results))])
(multi-reg-line-plus-scatter-c3 rxs rys (vector->list xs) (vector->list ys))))
(dist-pdf (beta-dist 2 1) 0.3)
(let ([pts (range 0.0001 0.9999 0.001)])
(line-c3 pts (map (lambda (x) (dist-pdf (beta-dist 2 1) x)) pts)))
| sean_notebooks/linear_regression/linear_regression_tiny.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="LMp5lKNjgBXq"
# ## Initial setup
# + colab={"base_uri": "https://localhost:8080/"} id="CDVgw5FnT6Hc" outputId="6ead78b1-2265-4d07-9ffb-c216828f43c1" executionInfo={"status": "ok", "timestamp": 1647741255348, "user_tz": 240, "elapsed": 19409, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09753690843302881653"}}
from google.colab import drive
drive.mount('/content/drive')
# + colab={"base_uri": "https://localhost:8080/"} id="lIYdn1woOS1n" outputId="e3952904-bc6a-4951-cbfe-f73f4bd4c1c8" executionInfo={"status": "ok", "timestamp": 1647741267539, "user_tz": 240, "elapsed": 12195, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09753690843302881653"}}
import tensorflow as tf
print(tf.__version__)
import torch
print(torch.__version__)
import matplotlib
print(matplotlib.__version__)
# + colab={"base_uri": "https://localhost:8080/"} id="dZowsDvOYK37" outputId="aa5f2cda-52b5-45ab-8ab1-b5e784e1df38" executionInfo={"status": "ok", "timestamp": 1642794092303, "user_tz": 300, "elapsed": 341, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09753690843302881653"}}
# !nvidia-smi
# + colab={"base_uri": "https://localhost:8080/"} id="421MrJNMYQD7" outputId="f74bde95-81b7-409f-90e3-e04745b86e9c" executionInfo={"status": "ok", "timestamp": 1647741292540, "user_tz": 240, "elapsed": 25005, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09753690843302881653"}}
# Other imports
# ! pip install tensorflow_addons
# ! pip install tensorflow_io
import os
from tensorflow.keras.layers import *
from tensorflow.keras.models import *
from keras.callbacks import Callback, EarlyStopping, ModelCheckpoint
from tensorflow.keras.applications.resnet50 import preprocess_input
from tensorflow.keras.preprocessing.image import load_img
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from matplotlib.offsetbox import OffsetImage, AnnotationBbox
from matplotlib.ticker import MultipleLocator, FormatStrFormatter, AutoMinorLocator
from imutils import paths
from tqdm import tqdm
import tensorflow as tf
import tensorflow_addons as tfa
import tensorflow_datasets as tfds
import tensorflow_io as tfio
import tensorflow_hub as hub
import numpy as np
import cv2
import pandas as pd
import seaborn as sns
from scipy.stats import mannwhitneyu
from sklearn.preprocessing import LabelEncoder
from sklearn.cluster import KMeans
import sklearn.manifold
from sklearn.metrics.pairwise import cosine_similarity as cos
from sympy.utilities.iterables import multiset_permutations
from sklearn.metrics import accuracy_score, f1_score,precision_score, recall_score, roc_auc_score, confusion_matrix
from sklearn.model_selection import *
from sklearn.preprocessing import StandardScaler
from IPython.display import Image, display
import zipfile
import concurrent.futures
# Random seed fix
random_seed = 42
tf.random.set_seed(random_seed)
np.random.seed(random_seed)
# + [markdown] id="UUFlGxuJgBX9"
# ## Dataset gathering and preparation
# + id="TMkDpqWQDwuN"
# %cd /content/drive/MyDrive/nanowire-morphology-classification-project
# + id="YSsV0C11n90h"
training_batch_size = 4
BATCH_SIZE = training_batch_size
imageSize = 224
category_names = ['bundle', 'dispersed', 'network', 'singular']
color_method = ['C0', 'C1', 'C2', 'C3', 'C4']
color = ['black', 'magenta', 'cyan', 'yellow']
marker = ['o', 's', '<', '>', '^']
seaborn_palette = sns.color_palette("colorblind")
# + [markdown] id="0Jrpko7UTkZg"
# # generating the jpg images
# + colab={"base_uri": "https://localhost:8080/"} id="mwkedK8tIURC" outputId="855da46d-b951-4bfe-e6d2-4c028867eab0" executionInfo={"status": "ok", "timestamp": 1642973842947, "user_tz": 300, "elapsed": 9897, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09753690843302881653"}}
# generating jpg images from the original tif images
np.random.seed(random_seed)
peptide_morph_train_path = "/content/drive/MyDrive/TEM image datasets/2022-nanowire-morphology"
peptide_morph_images_train = list(paths.list_files(basePath=peptide_morph_train_path, validExts='tif'))
peptide_morph_images_train = np.random.choice(np.array(peptide_morph_images_train), len(peptide_morph_images_train), replace=False)
print(len(peptide_morph_images_train))
for i in range(len(peptide_morph_images_train)):
    img = cv2.imread(peptide_morph_images_train[i])
    if img is None:
        continue
    cv2.imwrite('%s.jpg' % peptide_morph_images_train[i].split(".")[0], img)
# + [markdown] id="FZ1mARYATq9t"
# ## image data augmentation for the singular morphology
# + id="DWYMtWk1KZob"
# generating augmented images for the singular morphology
np.random.seed(random_seed)
peptide_morph_train_path = "/content/drive/MyDrive/TEM image datasets/2022-nanowire-morphology/singular"
peptide_morph_images_train = list(paths.list_files(basePath=peptide_morph_train_path, validExts='jpg'))
peptide_morph_images_train = np.random.choice(np.array(peptide_morph_images_train), len(peptide_morph_images_train), replace=False)
print(len(peptide_morph_images_train))
for i in range(len(peptide_morph_images_train)):
    # augment the singular morphology with three rotated copies (90, 180, 270 degrees)
    img = cv2.imread(peptide_morph_images_train[i])
    img = cv2.rotate(img, cv2.ROTATE_90_CLOCKWISE)
    cv2.imwrite('%s_1.jpg' % peptide_morph_images_train[i].split(".")[0], img)
    img = cv2.rotate(img, cv2.ROTATE_90_CLOCKWISE)
    cv2.imwrite('%s_2.jpg' % peptide_morph_images_train[i].split(".")[0], img)
    img = cv2.rotate(img, cv2.ROTATE_90_CLOCKWISE)
    cv2.imwrite('%s_3.jpg' % peptide_morph_images_train[i].split(".")[0], img)
# + [markdown] id="s8JWMqy0Ta4a"
# # generating segmentation ground truth binary maps (seg_mask.npz files)
# + id="nbWIOZG8TUIt"
np.random.seed(random_seed)
peptide_morph_seglabel_train_path = "/content/drive/MyDrive/TEM image datasets/2022-nanowire-morphology"
peptide_morph_images_train_seglabel = list(paths.list_files(basePath=peptide_morph_seglabel_train_path, validExts='png'))
peptide_morph_images_train_seglabel = np.random.choice(np.array(peptide_morph_images_train_seglabel), len(peptide_morph_images_train_seglabel), replace=False)
print(len(peptide_morph_images_train_seglabel))
# + id="kw7Cl72CTUp5"
def generate_ground_truth_images(image, resolution):
image_bool = np.ones((resolution, resolution))
for i in range(image.shape[0]):
for j in range(image.shape[1]):
if image[i, j, 1] == image[i, j, 2]:
image_bool[i, j] = 0 # background is black with code of 0
else:
image_bool[i, j] = 1 # nanowire is white with code of 1
return image_bool
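The per-pixel loop above can be replaced by a vectorized NumPy comparison with identical output, which is much faster at 224×224 resolution (a sketch assuming the same convention: a pixel is background where its green and blue channels are equal):

```python
import numpy as np

def generate_ground_truth_images_fast(image):
    # 1.0 where channels G and B differ (nanowire), 0.0 where they match (background)
    return (image[:, :, 1] != image[:, :, 2]).astype(float)

# check against the loop-based version on a small random image
rng = np.random.default_rng(0)
image = rng.integers(0, 3, size=(8, 8, 3)).astype(float)

slow = np.ones((8, 8))
for i in range(8):
    for j in range(8):
        slow[i, j] = 0.0 if image[i, j, 1] == image[i, j, 2] else 1.0

assert np.array_equal(generate_ground_truth_images_fast(image), slow)
```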
# + id="_IOlNzdGTZ7Z"
segmentation_class_labels = []
for i in range(peptide_morph_images_train_seglabel.shape[0]):
seg_class_label = peptide_morph_images_train_seglabel[i].split("/")[-2]
segmentation_class_labels.append(seg_class_label)
le = LabelEncoder()
peptide_morph_train_seg_enc = le.fit_transform(segmentation_class_labels)
# + id="DSvBz_iOB40u" colab={"base_uri": "https://localhost:8080/"} outputId="3bb22e1f-739a-4f64-8579-36947ce82ef9" executionInfo={"status": "ok", "timestamp": 1642974501258, "user_tz": 300, "elapsed": 27892, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09753690843302881653"}}
image_mask = np.zeros((len(peptide_morph_images_train_seglabel), imageSize, imageSize))
for i in range(len(peptide_morph_images_train_seglabel)):
# these were used to create the ground truth grayscale images from the manual segmentation labels.
image_string = tf.io.read_file(peptide_morph_images_train_seglabel[i])
image = tf.image.decode_image(image_string, channels=3) / 255
image = tf.image.resize(image, (imageSize, imageSize))
image = tf.image.convert_image_dtype(image, tf.float32)
trans_nd_image_array = image.numpy()
image_mask[i] = generate_ground_truth_images(trans_nd_image_array, imageSize)
np.savez_compressed('seg_mask_res%i.npz' % imageSize, mask=image_mask)  # filename matches the actual resolution, cf. the load line below
# once we have the seg_mask saved, we can directly load from npz file
# image_mask = np.load('seg_mask_res%i.npz' % (imageSize), allow_pickle=True)['mask']
| notebooks/Preprocess images.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Tutorial about setting up an analysis pipeline and batch processing
# Quite often you experiment with various analysis routines and appropriate parameters and come up with an analysis pipeline. A pipeline procedure then is a script defining analysis steps for a single locdata object (or a single group of corresponding locdatas as for instance used in 2-color measurements).
#
# The `Pipeline` class can be used to combine the pipeline code, metadata and analysis results in a single pickleable object (meaning it can be serialized by the python pickle module).
#
# This pipeline might then be applied to a number of similar datasets. A batch process is such a procedure for running a pipeline over multiple locdata objects and collecting and combining results.
# + tags=[]
from pathlib import Path
# %matplotlib inline
import matplotlib.pyplot as plt
import locan as lc
# + tags=[]
lc.show_versions(system=False, dependencies=False, verbose=False)
# -
# ## Apply a pipeline of different analysis routines
# ### Load rapidSTORM data file
# + tags=[]
path = lc.ROOT_DIR / 'tests/test_data/rapidSTORM_dstorm_data.txt'
print(path)
dat = lc.load_rapidSTORM_file(path=path, nrows=1000)
dat.print_summary()
# + tags=[]
dat.properties
# -
# ### Set up an analysis procedure
# First define the analysis procedure (pipeline) in the form of a computation function. Make sure the first parameter is the `self` referring to the Pipeline object. Add arbitrary keyword arguments thereafter. By finishing with `return self`, the compute method can be chained directly onto instantiation.
# + tags=[]
def computation(self, locdata, n_localizations_min=4):
# import required modules
from locan.analysis import LocalizationPrecision
# prologue
self.file_indicator = locdata.meta.file.path
self.locdata = locdata
# check requirements
if len(locdata)<=n_localizations_min:
return None
# compute localization precision
self.lp = LocalizationPrecision().compute(self.locdata)
return self
# -
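Stripped of locan specifics, the pattern just described can be illustrated with a minimal stand-in class (a hypothetical sketch, not locan's actual implementation): the computation function receives the pipeline object as its first argument and stores its results as attributes on it.

```python
class MiniPipeline:
    """Hypothetical minimal sketch of the Pipeline pattern."""

    def __init__(self, computation, **kwargs):
        self._computation = computation
        self._kwargs = kwargs

    def compute(self):
        # the computation ends with `return self`, so compute() can be
        # chained directly onto instantiation
        return self._computation(self, **self._kwargs)

def computation(self, data, threshold=0):
    self.data = data
    self.total = sum(x for x in data if x > threshold)
    return self

pipe = MiniPipeline(computation, data=[1, -2, 3], threshold=0).compute()
print(pipe.total)
```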
# ### Run the analysis procedure
# Instantiate a Pipeline object and run compute():
# + tags=[]
pipe = lc.Pipeline(computation=computation, locdata=dat, n_localizations_min=4).compute()
pipe.meta
# -
# Results are available from Pipeline object in form of attributes defined in the compute function:
# + tags=[]
[attr for attr in dir(pipe) if not attr.startswith('__') and not attr.endswith('__')]
# + tags=[]
pipe.lp.results.head()
# + tags=[]
pipe.lp.hist();
print(pipe.lp.distribution_statistics.parameter_dict())
# -
# You can recover the computation procedure:
# + tags=[]
pipe.computation_as_string()
# -
# or save it as text protocol:
# + active=""
# pipe.save_computation(path)
# -
# The Pipeline object is pickleable and can thus be saved for revisits.
# ## Apply the pipeline on multiple datasets - a batch process
# Let's create multiple datasets:
# + tags=[]
path = lc.ROOT_DIR / 'tests/test_data/rapidSTORM_dstorm_data.txt'
print(path)
dat = lc.load_rapidSTORM_file(path=path)
locdatas = [lc.select_by_condition(dat, f'{min}<index<{max}') for min, max in ((0,300), (301,600), (601,900))]
locdatas
# -
# Run the analysis pipeline as batch process
# + tags=[]
pipes = [lc.Pipeline(computation=computation, locdata=dat).compute() for dat in locdatas]
# -
# As long as the batch procedure runs in a single computer process, the identifier increases with every instantiation.
# + tags=[]
[pipe.meta.identifier for pipe in pipes]
# -
# ### Visualize the combined results
# + tags=[]
fig, ax = plt.subplots(nrows=1, ncols=1)
for pipe in pipes:
pipe.lp.plot(ax=ax, window=10)
plt.show()
| docs/tutorials/notebooks/Analysis_pipeline.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: AVES
# language: python
# name: aves
# ---
# +
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import pandas as pd
import geopandas as gpd
# this sets the figure quality; it depends on your display resolution. the default value is 80
mpl.rcParams["figure.dpi"] = 96
# this depends on which fonts are installed on your system.
mpl.rcParams["font.family"] = "Fira Sans Extra Condensed"
pd.set_option('display.max_columns', None)
# -
df_2020 = pd.read_csv('../data/external/conaset/Siniestros_de_transito_en_rutas_de_Chile_2020..csv')
df_2015_2019 = pd.read_csv('../data/external/conaset/Siniestros_de_transito_en_rutas_de_Chile_2015_-_2019..csv')
df = pd.concat([df_2020, df_2015_2019])
df
gdf = gpd.GeoDataFrame(
df, geometry=gpd.points_from_xy(df.X, df.Y))
gdf.plot()
gdf.Región.unique()
gdf.query('Región == "REGION LOS LAGOS"').Comuna.unique()
comunas_chilwe = ['DALCAHUE', 'CHONCHI', 'CASTRO', 'QUEMCHI', 'QUINCHAO', 'ANCUD', '<NAME>', 'QUELLON', 'QUEILEN']
gdf_chilwe = gdf.query('Comuna in @comunas_chilwe')
gdf_chilwe.X.value_counts(dropna=False)
gdf_chilwe.query('X.isna()')
gdf_chilwe.query('X.notna()')
gdf_chilwe.Tipo_Accid.value_counts()
gdf_chilwe.groupby(['Tipo_Accid']).sum(numeric_only=True)
gdf_chilwe.Fallecidos.value_counts()
gdf_chilwe.query('Fallecidos > 0').X.isna().value_counts()
gdf_chilwe.query('Fallecidos > 0').query('X.isna()')
gdf_chilwe.query('Fallecidos > 0').query('X.notna()').explore()
| notebooks/902-vs-accidentes-trancito.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/sooriyapava/ISYS5002_portfolio/blob/main/tax_calculation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="bVMwNyExqHgH"
# Write a program that prompts the user for their income and calculates the tax payable.
#
# [Resident tax rates 2021–22 (taxable income)](https://www.ato.gov.au/rates/individual-income-tax-rates/)
#
#
# Income | Tax on this income
# ------------------|----------------------
# 0 – \$18,200 | Nil
# \$18,201 – \$45,000 | 19 cents for each \$1 over \$18,200
# \$45,001 – \$120,000 | \$5,092 plus 32.5 cents for each \$1 over \$45,000
# \$120,001 – \$180,000 | \$29,467 plus 37 cents for each \$1 over \$120,000
# \$180,001 and over | \$51,667 plus 45 cents for each \$1 over \$180,000
#
#
# + [markdown] id="kBNq83Zsc38R"
# Steps
#
# 1. Get the income
# 2. Calculate the tax payable
# + id="BSQnuKQvr-zw" colab={"base_uri": "https://localhost:8080/"} outputId="1d1cd854-ba20-488f-9855-9dc91453b5ee"
# Step 1 - Get the income
income = int(input("What is your income for the year? "))
print("Your income is " ,income)
# + colab={"base_uri": "https://localhost:8080/"} id="tj552y3EdJTq" outputId="21e707ee-bacf-4e04-f677-0ebfec98b6b7"
# Step 1 - Get the income
income = int(input("What is your income for the year? "))
print("Your income is ", income)
# Step 2 - Calculate the tax payable
# if you earn $18,200, then pay no tax
# else if you earn between $18,201 - $45,000,
# then pay 19 cents for every dollar over $18,200
if income <=18200:
tax_payable = 0
elif 18200 < income <= 45000:
tax_payable = 0.19 * (income - 18200)
elif 45000 < income <= 120000:
tax_payable = 5092 + 0.325 * (income - 45000)
elif 120000 < income <= 180000:
tax_payable = 29467 + 0.37 * (income - 120000)
elif 180000 < income:
tax_payable = 51667 + 0.45 * (income - 180000)
print("Your tax payable is ", tax_payable)
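As a quick arithmetic check, the bracket constants above are exactly the tax accumulated at each lower boundary, so the schedule is continuous: at \$45,000 the second-bracket formula gives 0.19 × (45,000 − 18,200) = \$5,092, the constant that starts the third bracket, and likewise at the other boundaries.

```python
def tax(income):
    # same resident 2021-22 brackets as above
    if income <= 18200:
        return 0.0
    if income <= 45000:
        return 0.19 * (income - 18200)
    if income <= 120000:
        return 5092 + 0.325 * (income - 45000)
    if income <= 180000:
        return 29467 + 0.37 * (income - 120000)
    return 51667 + 0.45 * (income - 180000)

# the schedule joins up at every bracket boundary
assert abs(tax(45000) - 5092) < 1e-6
assert abs(tax(120000) - 29467) < 1e-6
assert abs(tax(180000) - 51667) < 1e-6
```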
# + id="EZ89J6rZd-Lm"
def get_income():
'''
This function will prompt the user for the income and return the value
'''
income = int(input("What is your income for the year? "))
print("Your income is ", income)
return income
# + id="UbTclx2whWul"
def calculate_tax(income):
if income <= 18200:
tax_payable = 0
elif 18200 < income <= 45000:
tax_payable = 0.19 * (income - 18200)
elif 45000 < income <= 120000:
tax_payable = 5092 + 0.325 * (income - 45000)
elif 120000 < income <= 180000:
tax_payable = 29467 + 0.37 * (income - 120000)
elif 180000 < income:
tax_payable = 51667 + 0.45 * (income - 180000)
return tax_payable
# + colab={"base_uri": "https://localhost:8080/"} id="3SbDTkUKhqyd" outputId="11954e06-4486-4ca6-a637-6f1bf304b381"
# 'Main' section - in a real script this would usually live inside a main() function
# Step 1
income = get_income()
# Step 2
tax = calculate_tax(income)
print("Your tax payable is: ", tax)
# + id="JzPczh3Yh_rD"
| tax_calculation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="iQjHqsmTAVLU"
# ## Exercise 3
# In the videos you looked at how you would improve Fashion MNIST using Convolutions. For your exercise see if you can improve MNIST to 99.8% accuracy or more using only a single convolutional layer and a single MaxPooling 2D. You should stop training once the accuracy goes above this amount. It should happen in less than 20 epochs, so it's ok to hard code the number of epochs for training, but your training must end once it hits the above metric. If it doesn't, then you'll need to redesign your layers.
#
# I've started the code for you -- you need to finish it!
#
# When 99.8% accuracy has been hit, you should print out the string "Reached 99.8% accuracy so cancelling training!"
#
# + colab={} colab_type="code" id="sfQRyaJWAIdg"
import tensorflow as tf
# YOUR CODE STARTS HERE
class myCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        # The metric key is 'accuracy' in TF2, 'acc' in older versions
        acc = logs.get('accuracy') or logs.get('acc')
        if acc is not None and acc > 0.998:
            print("\nReached 99.8% accuracy so cancelling training!")
            self.model.stop_training = True
callbacks = myCallback()
# YOUR CODE ENDS HERE
mnist = tf.keras.datasets.mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
# YOUR CODE STARTS HERE
training_images = training_images / 255.0
test_images = test_images / 255.0
training_images = training_images.reshape(60000, 28, 28, 1)
test_images = test_images.reshape(10000, 28, 28, 1)
# YOUR CODE ENDS HERE
model = tf.keras.models.Sequential([
# YOUR CODE STARTS HERE
tf.keras.layers.Conv2D(32, (3, 3), activation = 'relu', input_shape = (28, 28, 1)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation = 'relu'),
tf.keras.layers.Dense(10, activation = 'softmax')
# YOUR CODE ENDS HERE
])
# YOUR CODE STARTS HERE
model.compile(optimizer = 'adam', loss = 'sparse_categorical_crossentropy', metrics = ['accuracy'])
model.fit(training_images, training_labels, epochs = 20, callbacks = [callbacks])
# YOUR CODE ENDS HERE
# -
| 1.Introduction to TensorFlow/Week3/Exercise_3_Question.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from TCGA_files import *
from gtex import get_gtex_tissue
import seaborn as sns
import sys
from hsbmpy import topic_distr_isample, topic_distr_sample,get_file, get_tissue_style, get_max_available_L
from TCGA_files import get_tcga_tissue
#label = 'disease_type'
labels = ['primary_site', 'secondary_site']
label=labels[0]
algorithm = "topsbm"
directory='/home/fvalle/phd/datasets/breast_HDE/'
L = get_max_available_L(directory, algorithm)
df = pd.read_csv("%s/%s/%s_level_%d_topic-dist.csv"%(directory,algorithm,algorithm,L))
df.head()
# ### Specific topic
topic = 15
searchdf = df.sort_values(by="Topic %d"%topic, ascending=False).loc[:,['doc','Topic %d'%topic]]
searchdf.head()
#datatotest = queryFiles([f[0]+'.FPKM.txt.gz' for f in searchdf.values[:30] if f[1]>0.1])
#datatotest = queryFiles([f[0] for f in searchdf.values[:10]])
datatotest = pd.DataFrame(columns=['primary_site','secondary_site'])
for file in [f[0] for f in searchdf.values[:10]]:
    datatotest = pd.concat([datatotest, get_gtex_tissue(file)])
datatotest
makeTopicPie(datatotest, L, ['primary_site','secondary_site'])
df_file=pd.read_csv("files.dat", index_col=[0])
samples = []
for sample in df['doc']:
if 'Lung' in get_gtex_tissue(sample)['primary_site']:
samples.append(sample)
tissuedf = df[df['doc'].isin(samples)].drop('i_doc', axis=1)
tissuedf.mean(axis=0).sort_values(ascending=False)
# ## topic distr
for idoc in searchdf.index.values[:5]:
fig=plt.figure()
ax=fig.subplots()
topic_distr_isample(idoc,df,ax)
plt.show()
fig.savefig("topic_distr_%d.png"%idoc)
for idoc in np.random.randint(len(df.index), size=10):
topic_distr_isample(idoc, df)
# ## Topic distr
# ### kl
l=L
df_kl = pd.read_csv("%s/topsbm/topic-kl_%d.csv"%(directory,l), header=None)
df_kl.columns = ['first', 'second', 'kl']
df_kl.head()
df_cluster = pd.read_csv("%s/topsbm/topsbm_level_%d_clusters.csv"%(directory,l))
df_topics = pd.read_csv("%s/topsbm/topsbm_level_%d_topic-dist.csv"%(directory,l)).loc[:,df.columns[2:]]
df_files = pd.read_csv("%s/files.dat"%(directory), index_col=[0], header=0)
bins = np.linspace(-0.025,1.025,40)
sites = df_files[label].unique()
df_tissue_kl = pd.DataFrame(index=sites, columns=sites, dtype=float)
for tissue_row in sites:
cluster_row = df_files[df_files['primary_site']==tissue_row].index.values
for tissue_column in sites:
print(tissue_row, tissue_column)
cluster_column = df_files[df_files['primary_site']==tissue_column].index.values
datarc = df_kl[(df_kl['first'].isin(cluster_row) & df_kl['second'].isin(cluster_column))]['kl'].values
datacr = df_kl[(df_kl['first'].isin(cluster_column) & df_kl['second'].isin(cluster_row))]['kl'].values
df_tissue_kl.at[tissue_row,tissue_column]=(np.average(np.concatenate((datarc,datacr))))
h = sns.clustermap(df_tissue_kl.dropna(axis=0,how='all').dropna(axis=1, how='any'), cmap=sns.diverging_palette(15,250, n=15), metric='euclidean')
dn = h.dendrogram_col.dendrogram
h.fig.savefig("%s/topic_distr_kl_map.pdf"%directory)
import scipy.cluster.hierarchy as shc
fig = plt.figure(figsize=(12,8))
ax = fig.subplots()
ax.set_xlabel("kl correlation", fontsize=16)
dend = shc.dendrogram(h.dendrogram_col.linkage, labels=df_tissue_kl.columns, orientation='right', distance_sort='descending', ax=ax)
fig.savefig("%s/topic_dist_dendrogram_level(%d).pdf"%(directory,l))
# ### Topic maps
l=L-1
df_topics = pd.read_csv("%s/%s/%s_level_%d_topic-dist.csv"%(directory,algorithm,algorithm,l))
df_files = pd.read_csv("%s/files.dat"%directory, index_col=0).dropna(how='all', axis=0)
df_topics.set_index('doc', inplace=True)
df_topics.insert(0,'tissue','')
df_topics.drop('i_doc', axis=1, inplace=True)
print(df_files.columns)
label = 'pathologic_stage'
for sample in df_topics.index.values:
df_topics.at[sample,'tissue']=("%s"%(get_file(sample,df_files)[label]))
# +
df_cmap = df_topics.sort_values(by='tissue').set_index('tissue').transpose()
df_cmap = df_cmap.subtract(df_cmap.mean(axis=1),axis=0)
#create a color palette with the same number of colors as unique values in the Source column
network_pal = sns.color_palette('husl',n_colors=len(df_cmap.columns))
#Create a dictionary where the key is the category and the values are the
#colors from the palette we just created
network_lut = dict(zip(df_cmap.columns, network_pal))
network_col = df_cmap.columns.map(network_lut)
cm = sns.clustermap(df_cmap, row_cluster=False, col_cluster=False, metric='euclidean', vmin=0, cmap='RdYlBu_r', col_colors=network_col)
cm.fig.savefig("%s/%s/MAP_level%d.pdf"%(directory,algorithm,l))
# -
df_topics = pd.read_csv("%s/%s/%s_level_%d_topic-dist.csv"%(directory,algorithm, algorithm,l))
df_topics.drop('i_doc', axis=1, inplace=True)
df_topics.set_index('doc', inplace=True)
for sample in df_topics.index.values:
df_topics.at[sample,'tissue']="%s"%(get_file(sample,df_files)[label])
# +
fig,ax = plt.subplots(figsize=(25,12))
for tissue in df_topics['tissue'].unique():
print(tissue)
marker, c, ls = get_tissue_style(tissue)
try:
df_topics[df_topics['tissue']==tissue].loc[:,df_topics.columns[0:]].mean(axis=0).plot(ls=ls,marker=marker, lw=2, ms=10, ax=ax, label=tissue, c=network_lut[df_files[df_files[label]==tissue][label][0]])
except:
print(*sys.exc_info())
ax.tick_params(rotation=90, labelsize=24)
ax.set_ylabel("$P(topic | tissue)$", fontsize=28)
#plt.xscale('log')
#plt.yscale('log')
# Shrink current axis by 20%
box = ax.get_position()
ax.set_position([box.x0, box.y0, box.width * 0.8, box.height])
# Put a legend to the right of the current axis
ax.legend(fontsize=18, ncol=1, loc='center left', bbox_to_anchor=(1, 0.5))
plt.show()
fig.savefig("%s/%s/lifeplot_level%d.pdf"%(directory,algorithm,l))
# -
import findspark
findspark.init()
import pyspark as spark
import tensorflow as tf
from pyspark.sql.functions import udf,col
from pyspark.sql.types import StringType
try:
    if sc:
        sc.stop()
except NameError:
    pass
conf = spark.SparkConf().set('spark.driver.host','127.0.0.1')
sc = spark.SparkContext(master='local', appName='hSBM_topic-dist',conf=conf)
sql = spark.SQLContext(sc)
df_files_pd = pd.read_csv("%s/files.dat"%directory, index_col=0).dropna(how='all', axis=0)
df_topics = sql.read.option("header","true").csv("%s/%s/%s_level_%d_topic-dist.csv"%(directory,algorithm,algorithm,l))
df_files = sql.read.option("header","true").csv("%s/files.dat"%(directory))
df_topics = df_topics.withColumn('status', udf(lambda x: 'healthy' if 'GTEX' in x else 'tumour', StringType())(col('doc')))
df_topics = df_topics.withColumn('tissue', udf(lambda x: get_file(x, df_files_pd)[label], StringType())(col('doc')))
#df_topics = df_topics.withColumn('second_tissue', udf(lambda x: get_file(x, df_files_pd)[labels[1]], StringType())(col('doc')))
df_topics.registerTempTable("topic")
df_files.registerTempTable("file")
df_topic_list = sql.read.option("header","true").csv("%s/%s/%s_level_%d_topics.csv"%(directory,algorithm,algorithm,l))
# ### only health vs disease
dftf_h = tf.convert_to_tensor(sql.sql("SELECT * FROM topic WHERE status='healthy'").toPandas().drop(['doc','i_doc', 'status', 'tissue','second_tissue'], axis=1, errors='ignore').astype(float).values)
dftf_d = tf.convert_to_tensor(sql.sql("SELECT * FROM topic WHERE status='tumour'").toPandas().drop(['doc','i_doc', 'status', 'tissue','second_tissue'], axis=1, errors='ignore').astype(float).values)
with tf.Session() as sess:
results = sess.run(tf.sort([tf.math.reduce_mean(dftf_h,0),tf.math.reduce_mean(dftf_d,0)], axis=1, direction='DESCENDING'))
fig = plt.figure(figsize=(20,10))
plt.plot(results[0], marker='o', lw=0.1)
plt.plot(results[1],marker='x', lw=0.1)
plt.yscale('log')
plt.xscale('log')
plt.show()
df_topics
topic_cols=df_topics.columns[2:-3]
look_for = ['colon', 'colon-gtex','colon-tcga']
exprs = {x: "avg" for x in topic_cols}
df_tissue_healthy=df_topics.filter(col('tissue')==look_for[0]).select(topic_cols).agg(exprs).toPandas()[["avg(%s)"%topic for topic in topic_cols]]
df_tissue_disease=df_topics.filter(col('tissue')!=look_for[2]).select(topic_cols).agg(exprs).toPandas()[["avg(%s)"%topic for topic in topic_cols]]
df_topics.groupby('tissue').count().show()
means = df_topics.groupby(['status','tissue']).agg(exprs).agg({x: 'avg' for x in ["avg(%s)"%t for t in topic_cols]}).toPandas()[["avg(avg(%s))"%topic for topic in topic_cols]]
means.columns=topic_cols
sigmas = df_topics.groupby(['status','tissue']).agg({x: "std" for x in topic_cols}).agg({x: 'std' for x in ["stddev(%s)"%t for t in topic_cols]}).toPandas()[["stddev(stddev(%s))"%topic for topic in topic_cols]]
sigmas.columns=topic_cols
df_topics_grouped = df_topics.groupby(['status','second_tissue']).agg(exprs)
for topic in topic_cols:
plt.figure()
plt.title(topic)
healthy=(df_topics.filter(col('tissue')==look_for[0]).select([topic]).toPandas().astype(float)-means[topic].values)/sigmas[topic].values
disease=(df_topics.filter(col('tissue')!=look_for[0]).select([topic]).toPandas().astype(float)-means[topic].values)/sigmas[topic].values
plt.hist(healthy.values.T[0], density=True, histtype='step', label=look_for[0])
plt.hist(disease.values.T[0], density=True, histtype='step', label='Other')
plt.vlines([healthy.mean(),disease.mean().values],0,0.2,colors=['blue','orange'], linestyles=['dashed','dashed'])
plt.vlines([-3,3],0,0.2,colors=['k','k'])
plt.legend()
plt.show()
for g in df_topic_list.select("Topic 5").dropna().toPandas().values.T[0]:
print(g)
for topic in topic_cols:
plt.figure(figsize=(15,8))
plt.title(topic)
for tissue in df_topics.select('second_tissue').distinct().toPandas().values.T[0]:
tissue_spec=(df_topics.filter(col('second_tissue')==tissue).select([topic]).toPandas().astype(float)-means[topic].values)/sigmas[topic].values
plt.hist(tissue_spec.values.T[0], density=True, histtype='step', label=tissue)
plt.xscale('log')
plt.yscale('log')
plt.legend()
plt.show()
df_topics_grouped = df_topics.groupby('second_tissue').agg({x: 'avg' for x in topic_cols}).toPandas().set_index('second_tissue')[["avg(%s)"%t for t in topic_cols]].transpose()
df_topics_grouped.index=topic_cols
# +
df_cmap = df_topics_grouped
df_cmap=df_cmap.subtract(df_cmap.mean(axis=1), axis=0).divide(df_cmap.std(axis=1), axis=0)
df_cmap.sort_index(axis=1, inplace=True)
#df_cmap.sort_values(by=[c for c in df_cmap.columns[::2]], inplace=True)
#create a color palette with the same number of colors as unique values in the Source column
network_pal = sns.color_palette('husl',n_colors=len(df_cmap.columns))
#Create a dictionary where the key is the category and the values are the
#colors from the palette we just created
network_lut = dict(zip(df_cmap.columns, network_pal))
network_col = df_cmap.columns.map(network_lut)
fig = plt.figure()
cm = sns.clustermap(df_cmap, row_cluster=False, col_cluster=False, metric='euclidean', cmap='RdYlBu', col_colors=network_col)
cm.fig.savefig("%s/MAP_level%d.png"%(directory,l))
# -
fig=plt.figure(figsize=(10,8))
#plt.plot((df_tissue).values[0], label=look_for[0], ls='--', ms=10)
plt.plot((df_tissue_healthy).values[0], label=look_for[1], marker='x', lw=0.5, ms=10)
plt.plot((df_tissue_disease).values[0], label=look_for[2], marker='x', lw=0.5, ms=10)
plt.xticks(ticks=np.arange(len(topic_cols)), labels=topic_cols, rotation=90)
#plt.yscale('log')
plt.legend(fontsize=20)
plt.show()
(df_tissue_healthy - df_tissue_disease).values
# ### all
df_all = tf.convert_to_tensor(sql.sql("SELECT * FROM topic").toPandas().drop(['i_doc', 'doc', 'status', 'tissue'], axis=1).astype(float).values)
#normed_df = tf.divide(tf.subtract(df_all,tf.reduce_mean(df_all,0)),tf.math.reduce_mean(df_all, 0))
#normed_df = tf.divide(tf.abs(tf.subtract(df_all,tf.reduce_mean(df_all,0))),tf.sqrt(tf.math.reduce_variance(df_all, 0)))
normed_df = tf.divide(tf.subtract(df_all,tf.reduce_mean(df_all,0)),tf.sqrt(tf.math.reduce_variance(df_all, 0)))
#normed_df = tf.divide(tf.math.divide(df_all,tf.reduce_mean(df_all,0)), tf.cast(tf.shape(df_all)[0], tf.float64))
#normed_df = tf.math.multiply(df_all,tf.reduce_mean(df_all,0))
result = normed_df.numpy()
fig=plt.figure(figsize=(30,15))
topics_i = np.arange(df_all.shape[1])
label = 'tissue'
for tissue in df_topics.select('tissue').distinct().toPandas().values.ravel():
print(tissue)
if tissue is None:
continue
marker, c, ls = get_tissue_style(tissue)
c = network_lut[tissue]
i_docs = sql.sql("SELECT i_doc, %s FROM topic WHERE %s='%s'"%(label, label,tissue)).select('i_doc').toPandas().astype(int).values.T[0]
plt.plot(np.mean(result[i_docs],axis=0)[topics_i], marker=marker, lw=0.8, ls=ls, label=tissue, ms=18, c=c)
#plt.hist(np.mean(result[0][i_docs],axis=0)[l_topics_i])
plt.legend(fontsize=18, ncol=3)
plt.ylabel(r"$\frac{\left|P(topic | tissue) - mean\right|}{\sigma}$", fontsize=44)
plt.xticks(np.linspace(0,len(topics_i)-1,num=len(topics_i)), ["Topic %d"%(t+1) for t in topics_i], rotation=75, fontsize=24)
plt.tick_params(labelsize=24)
#plt.yscale('log')
plt.show()
fig.savefig("%s/%s/lifeplot_normalised_level%d.pdf"%(directory,algorithm,l))
for tissue in df_topics.select(label).distinct().toPandas().values.T[0][:]:
print(tissue)
fig=plt.figure()
plt.title(tissue)
df_visual = sql.sql("SELECT * FROM topic WHERE tissue='%s'"%tissue).toPandas().drop(['i_doc', 'doc', 'status', 'tissue','second_tissue'], axis=1).astype(float)
width = np.zeros(len(df_visual.index))
for t in df_visual.columns:
plt.barh(df_visual.index.values,df_visual[t].values,left=width)
width+=df_visual[t].values
plt.show()
fig.savefig("%s/%s/topicvisual_%s.png"%(directory,algorithm,tissue))
sc.stop()
| hSBM_topic-dist.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <center><h3>
# Note: As of <code>verta==0.15.10</code>, the APIs used in this notebook are outdated.
# </h3></center>
# # NYC Taxi Dataset on Atlas/Hive
# ## Environment
# +
try:
import verta
except ImportError:
# !pip3 install verta
try:
from pyhive import hive
except ImportError:
# !pip3 install pyhive
# !pip3 install thrift
# !pip3 install sasl
# !pip3 install thrift_sasl
from pyhive import hive
from __future__ import print_function
import warnings
warnings.filterwarnings("ignore", category=FutureWarning)
import pandas as pd
# +
HOST = "app.verta.ai"
PROJECT_NAME = "NYC Taxi Demand Prediction"
# +
# import os
# os.environ['VERTA_EMAIL'] = ''
# os.environ['VERTA_DEV_KEY'] = ''
# -
# ---
# ## Read Connection Information for Atlas/Hive from Environment
# atlas_url = %env ATLAS_URL
# atlas_user_name = %env ATLAS_USER_NAME
# atlas_password = %env ATLAS_PASSWORD
# hive_url = %env HIVE_URL
# hive_password = %env HIVE_PASSWORD
print("Atlas username {}set".format('' if atlas_user_name else "NOT "))
print("Atlas password {}set".format('' if atlas_password else "NOT "))
print("Hive password {}set".format('' if hive_password else "NOT "))
[atlas_url, hive_url]
# ## Instantiate Client
# +
from verta import Client
from verta.utils import ModelAPI
client = Client(HOST)
proj = client.set_project(PROJECT_NAME)
# -
# ## Create Dataset and Dataset Version
dataset = client.set_dataset("NYC Taxi Dataset on Atlas and Hive", type="atlas hive")
atlas_entity_endpoint = "/api/atlas/v2/entity/bulk"
atlas_guid = "d2fdde40-706f-44af-afde-155177b8d2e4"
version = dataset.create_version(atlas_guid,
atlas_url, atlas_user_name,
atlas_password)
# ---
# # Fetch Data from Hive
table_name = list(filter(lambda x: x.key=="table_name", version.attrs))[0].value.string_value
database_name = list(filter(lambda x: x.key=="database_name", version.attrs))[0].value.string_value
query = "select * from {}.{}".format(database_name, table_name)
# +
cursor = hive.connect(hive_url).cursor()
cursor.execute(query)
data = cursor.fetchall()
col_names = [x[0] for x in cursor.description]
data_frame = pd.DataFrame(data, columns=col_names)
data_frame.head()
# -
# ---
| client/workflows/demos/nyc-taxi-atlas-end-to-end.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Credit Card Fraud Detection
# ## Introduction
#
# It is important that credit card companies are able to recognize frauds on credit card transactions.
#
# On Kaggle, we have access to a dataset which contains transactions made by credit cards in September 2013 by European cardholders. This dataset presents transactions that occurred over two days, with 492 frauds out of 284,807 transactions. The dataset is highly unbalanced: the positive class (frauds) accounts for 0.172% of all transactions.
#
# The features V1, V2, ..., V28 are the principal components obtained with PCA; all are numeric and confidential.
# #### Dataset:
#
# https://www.kaggle.com/mlg-ulb/creditcardfraud
# ## Problem Statement
#
# Given the problem of fraudulent credit card transactions and our data, how well can we predict them?
#
# To solve this problem, we'll follow a standard data science pipeline plan of attack:
#
# #### 1. Understand the problem and the data
# #### 2. Data exploration
# #### 3. Feature engineering / feature selection
# #### 4. Model evaluation and selection
# #### 5. Model optimization
# #### 6. Interpretation of results and predictions
# ### Getting Started:
#
# Doing the necessary imports:
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import f1_score
import warnings
# Let's ignore warnings about deprecated things:
warnings.filterwarnings("ignore")
# #### Reading the dataset:
card_transactions = pd.read_csv('creditcard.csv')
# ## Understand the problem and the data
#
# I will start seeing the shape and columns names of our dataset, to answer my question: How many features and instances do I have?
print(card_transactions.shape)
print('---------------------------------------------------------------------------')
print(card_transactions.columns)
# As mentioned before, the features have passed through a PCA algorithm and are confidential. Their names don't help us understand them.
#
# Let's check all feature __types__:
print(card_transactions.dtypes)
# All of them are numerical, which is consistent, so no type casting is needed.
# ### Data Distribution
# In the dataset description on Kaggle's challenge page, the data are said to have an unbalanced distribution. Let's check:
# +
count_classes = card_transactions['Class'].value_counts(sort=True)
# Creating a plot with bar kind:
count_classes.plot(kind = 'bar', rot=0)
# Setting plot title and axis labels:
plt.title("Data Class distribution")
plt.xlabel("Class")
plt.ylabel("Frequency")
# -
# Now I'm sure: the data is heavily unbalanced.
# __There are several ways to approach this unbalanced distribution problem:__
#
# - Collect more data. (Not applicable in this case)
# ***
# - Use metrics like F1, Precision, Recall and ROC
# - __Here is a link for a very good post talking about metrics for unbalanced data: https://towardsdatascience.com/what-metrics-should-we-use-on-imbalanced-data-set-precision-recall-roc-e2e79252aeba__
# ***
# - Resampling the dataset
#
#     - This is a method that will process the data to reach an approximate 50-50 ratio;
#
#     - One way to achieve this is OVER-sampling, adding copies of the under-represented class (better with __little__ data);
#
# - Another way is UNDER-sampling, deleting instances from the over-represented class (better with __lot's__ of data).
# ***
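# To make the resampling idea above concrete, here is a minimal random under-sampling sketch on a toy DataFrame (illustrative only -- the column names and sizes are made up, not taken from the Kaggle data):

```python
import numpy as np
import pandas as pd

# Toy skewed dataset: 2% positive class, mimicking a fraud-like imbalance.
rng = np.random.default_rng(42)
toy = pd.DataFrame({
    "feature": rng.normal(size=1000),
    "Class": [1] * 20 + [0] * 980,
})

# UNDER-sampling: keep every minority-class row and draw an equally sized
# random subset of the majority class, then shuffle the combined frame.
minority = toy[toy["Class"] == 1]
majority = toy[toy["Class"] == 0].sample(n=len(minority), random_state=42)
balanced = pd.concat([minority, majority]).sample(frac=1, random_state=42)

print(balanced["Class"].value_counts())
```

The resulting frame has 20 rows of each class, at the cost of discarding most of the majority-class data -- which is why under-sampling suits datasets with lots of data.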
# ## Data exploration / Data cleaning
# ### Are there any null values in the DataFrame?
# I'm going to check whether any instances contain null values:
print(card_transactions.isnull().values.any())
# Good, I don't need to worry about treating null values.
# ### Analysing Fraud and Valid Transactions
# Determine the number of fraud and valid transactions:
# +
fraud_data = card_transactions[card_transactions['Class'] == 1]
normal_data = card_transactions[card_transactions['Class'] == 0]
print('Fraud shape: ' + str(fraud_data.shape))
print('Valid shape: ' + str(normal_data.shape))
# -
# #### What percentage does each class represent in this skewed distribution?
print('No Fraud: ', round(len(normal_data)/len(card_transactions) * 100,2), '% of the dataset.')
print('Fraud: ', round(len(fraud_data)/len(card_transactions) * 100,2), '% of the dataset.')
# #### How different are the amounts of money used in the different transaction classes?
# ##### Normal transactions:
print(normal_data.Amount.describe())
# ##### Fraud transactions:
print(fraud_data.Amount.describe())
# ## Feature engineering / feature selection
# I am not going to perform feature engineering or feature selection at first.
#
# The dataset has already been reduced to 30 features (28 anonymous + time + amount). According to Kaggle's description, PCA was used as feature engineering to reduce the number of features.
#
# The only thing I'm going to do is normalize the _Amount_ feature. As we saw previously, it has a lot of variation.
amount_values = card_transactions['Amount'].values
standardized_amount = StandardScaler().fit_transform(amount_values.reshape(-1, 1))
card_transactions['normAmount'] = standardized_amount
card_transactions = card_transactions.drop(['Time', 'Amount'], axis=1)
print(card_transactions['normAmount'].head())
# ## Model evaluation and selection
#
# ### Approach:
# 1. Select Classifiers Algorithms to be used.
# ***
# 2. Compare what happens when using resampling techniques and when not using it.
#     - Evaluate the models using *_Stratified Cross Validation_ (for not-resampled data), normal Cross Validation (for resampled data) and some of the performance metrics mentioned before.
# ***
# 3. Repeat the best resampling/not resampling method, by tuning the parameters.
# ***
#
# *_Stratified Cross Validation_ is a CV technique recommended for a large imbalance in the distribution of the target class, in which the folds are made by preserving the percentage of samples of each class.
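# A tiny illustration of why stratification matters (toy arrays, not the credit card data): with a 10% positive class, every stratified test fold preserves that 10% ratio instead of drawing a lopsided random split.

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

# 100 toy samples with a 10% positive class.
y = np.array([0] * 90 + [1] * 10)
X = np.zeros((100, 1))

sss = StratifiedShuffleSplit(n_splits=3, test_size=0.3, random_state=0)
for train_index, test_index in sss.split(X, y):
    # Each 30-sample test fold keeps exactly 3 positives (10%).
    print(y[test_index].sum(), "positives out of", len(test_index))
```

A plain random 30-sample split could easily draw 0-6 positives, which would make fold-to-fold metrics on rare classes very noisy.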
# ### Classifier Algorithms:
#
# I'm going to use these algorithms:
#
# * [Multi-layer Perceptron (MLPClassifier)](http://scikit-learn.org/stable/modules/neural_networks_supervised.html#multi-layer-perceptron)
# * [Random Forest Classifier (RandomForestClassifier)](http://scikit-learn.org/stable/modules/ensemble.html#random-forests)
# ### Not resampling:
# Spliting data in X set and Y set (target):
X = card_transactions.iloc[:, card_transactions.columns != 'Class']
y = card_transactions.iloc[:, card_transactions.columns == 'Class']
# I'm going to create a function to perform and evaluate the models.
# - As said before, going to use Stratified Cross Validation because the dataset distribution is imbalanced
# - Will evaluate models with metrics: F1 and ROC_AUC
# +
# This fuction will evaluate used models returning f1 and roc_auc scores averages
# of Stratified Cross Validation folders.
def evaluate_models(X, y):
    # Resetting the score lists inside the module-level f1_scores/roc_scores dicts:
f1_scores['MLPClassifier'] = []
roc_scores['MLPClassifier'] = []
f1_scores['RandomForestClassifier'] = []
roc_scores['RandomForestClassifier'] = []
# Initializing Stratified Cross Validation:
sss = StratifiedShuffleSplit(n_splits=5, test_size=0.3, random_state=42)
count = 0
for train_index, test_index in sss.split(X, y):
count += 1
X_train, X_test = X.iloc[train_index], X.iloc[test_index]
y_train, y_test = y.iloc[train_index], y.iloc[test_index]
perform_models(
[
MLPClassifier(solver='lbfgs'),
RandomForestClassifier(n_estimators=100, n_jobs=-1),
],
X_train, X_test,
y_train, y_test,
count
)
print('Results:')
for model in f1_scores.keys():
print(' ' + model + ' has f1 average: ' + str( sum(f1_scores[model]) / len(f1_scores[model]) ))
print(' ' + model + ' has roc_auc average: ' + str( sum(roc_scores[model]) / len(roc_scores[model]) ))
# Function to perform a list of models:
def perform_models(classifiers, X_train, X_test, y_train, y_test, count):
string = ''
    print(str(count) + ' iteration:\n')
for classifier in classifiers:
# Creating key index in dict to save evaluation metrics value:
string += classifier.__class__.__name__
# Train:
classifier.fit(X_train, y_train)
# Predicting values with model:
predicteds = classifier.predict(X_test)
# Getting score metrics:
f1 = f1_score(y_test, predicteds)
roc = roc_auc_score(y_test, predicteds, average='weighted')
# Adding scores:
f1_scores[classifier.__class__.__name__].append(f1)
roc_scores[classifier.__class__.__name__].append(roc)
string += ' has f1: ' + str(f1) + ' roc_auc: ' + str(roc)+ '\n'
print(' ' + string)
string = ''
print('-----------------------------------------------------------------')
# -
# Now I'm going to create f1_scores and roc_scores dictionaries and call evaluate_models function:
# +
f1_scores = {}
roc_scores = {}
evaluate_models(X, y)
# -
# ### Resampling (with under-sampling):
# ## Model optimization
# ## Interpretation of results and predictions
| 03_Validation/Credit Card Fraud Detection.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Week 4 - Key concepts in statistics and machine learning
#
# This is a no-code tutorial that will introduce you to the very fundamentals of statistics and machine learning. The definitions and techniques introduced here will be useful later in this course, where we apply them in real-life situations.
#
# ### Aims
#
# - Statistics
#     - Define statistics and its purpose
# - Define the Type I, Type II errors and the power of a statistical test
# - Differentiate between the H<sub>0</sub> and H<sub>1</sub>
# - Understand what the p-value is and how it is calculated
# - Decide whether to reject the null hypothesis or not
#
#
#
# - Machine learning
# - Distinguish between different types of machine learning problems
#     - Understand the general framework of linear regression
# ## Key concepts in statistics
# Statistics is a branch of applied mathematics that deals with collecting, organising, analysing, reading and presenting data. It can be further divided into:
#
# - **Data collection**: The criteria by which the data are collected.
# - **Descriptive statistics**: How the data can be summarized and described.
# - **Inferential statistics**: Using the data to make predictions.
#
# Let's see all these steps in action: assume that there is an **imaginary** organization where the SET is trying to estimate how many days each employee aims to spend on-site (assuming there are two regions, site A and site B). The idea is to check whether the same pattern is observed at both sites.
#
# The admin staff are working very hard to gather accurate information and be able to predict the number of days the employees aim to spend on the site. In order to do that, they contact people by mail to ask them about their intentions. One key thing to address is <ins>how they should contact the public in order to ensure that people from different parts of the company are equally represented</ins> (*Data Collection*).
#
# Once the data have been collected, the admin staff can get an initial idea about some key measures. For instance, <ins>how do employees in a specific department intend to return to the site, or how did people of a specific age group change their working preferences after COVID?</ins> (*Descriptive statistics*).
#
# Once the results have been summarized and described, they can be used for prediction. For example, based on all the information collected from the emails, <ins>can we predict whether the number of days that the employees want to work on site is the same between the two sites?</ins> (*Inferential statistics*)
# ### From sample to population
# In the above example, the big question is predicting the number of days that employees aim to spend on site in each region. But as in the majority of real-life problems, we will only have access to a fraction of the whole population (a *sample*) rather than the whole population.
#
# **Statistics is the art of using the sample to make inference about the population**
#
#
# <img src="../img/sample_to_population.jpg"/>
#
# *Source: http://korbedpsych.com/R06Sample.html*
#
#
#
# We could rely only on the collected samples and make a verdict like: *In the sample we collected, we found that on site A employees aim to spend 3 days/week in the office, while on site B it's 2.3 days/week; thus we declare that site A employees want to get back on site more days!* While this might be a logical conclusion to draw, it completely ignores the following question:
#
# *What if we just got more willing site A employees just by chance?*
#
# In order to make inference about the whole population from our sample, we need to make some assumptions about the whole population.
# ### Defining a hypothesis: H<sub>0</sub> and H<sub>1</sub>
# Every problem in inferential statistics starts with a hypothesis. *A statistical hypothesis is a hypothesis that is testable on the basis of observed data, modelled as realised values following a set of assumptions.*
#
# In all the problems of this nature that we encounter here, there are two kinds of hypotheses:
#
# - **H<sub>0</sub>**, or the **null hypothesis**: The default hypothesis, under which the quantity measured is usually equal to zero (null).
# - **H<sub>1</sub>**, or the **alternative hypothesis**: A hypothesis that contradicts the null hypothesis.
#
# The alternative hypothesis can fully or partially contradict the null hypothesis. Coming back to our on-site employees example:
#
# Some formulation of the null vs the alternative hypothesis could be:
#
# - *Option 1*:
#     - H<sub>0</sub>: The mean number of hours that employees want to spend on-site is (statistically) the same on sites A and B
#     - H<sub>1</sub>: The mean number of hours that employees want to spend on-site on site A is (statistically) **greater** than on site B
#
# - *Option 2*:
#     - H<sub>0</sub>: The mean number of hours that employees want to spend on-site is (statistically) the same on sites A and B
#     - H<sub>1</sub>: The mean number of hours that employees want to spend on-site on site A is (statistically) **lower** than on site B
#
# - *Option 3*:
#     - H<sub>0</sub>: The mean number of hours that employees want to spend on-site is (statistically) the same on sites A and B
#     - H<sub>1</sub>: The mean number of hours that employees want to spend on-site on site A is (statistically) **different** from on site B
#
#
# Depending on the scientific question, the formulation of the null and alternative hypothesis can change. One thing that remains constant in the majority of hypothesis formulations is that the null hypothesis **should stay null**. This is due to mathematical convenience, as it is easier to model.
#
#
#
# Let's take a different example. A scientist is interested in investigating whether the light color (blue or red) has an effect on plant growth. The scientist measures the plants' growth in cm. Thus, the two hypotheses formulated here could be:
#
# - H<sub>0</sub>: Light color has no effect on plant growth
# - H<sub>1</sub>: Light color affects plant growth
#
# But, where is the zero in this null hypothesis? The zero is hidden in the more mathematical formulation of the above hypotheses. A data scientist/statistician would formulate the above hypotheses as :
#
# - H<sub>0</sub>: The difference in the growth of plants under blue light and the plants under red light is 0
# - H<sub>1</sub>: The difference in the growth of plants under blue light and the plants under red light is different from 0
#
# <img src="../img/null_vs_alternative_hypothesis.png" width="600">
#
# *Source: https://sciencenotes.org/null-hypothesis-examples/*
#
#
#
# ### Errors and its types
# A perfect test would have zero false positives and zero false negatives. However, statistical methods are probabilistic, and it cannot be known for certain whether statistical conclusions are correct. Whenever there is uncertainty, there is the possibility of making an error. Given this probabilistic nature of statistics, all statistical hypothesis tests have a probability of making Type I and Type II errors.
#
# - The **type I error rate or significance level** is the probability of rejecting the null hypothesis given that it is true. It is denoted by the Greek letter α (alpha) and is also called the alpha level. Usually, the significance level is set to 0.05 (5%), implying that it is acceptable to have a 5% probability of incorrectly rejecting the true null hypothesis.
#
# - The **type II error rate** is denoted by the Greek letter β (beta) and is the probability of failing to reject the null hypothesis when it is false.
#
# These two types of error rates are traded off against each other: for any given sample set, the effort to reduce one type of error generally results in increasing the other type of error.
#
# <img src="../img/error_types.PNG">
#
# *Source: Wikipedia*
#
# Another probability that is useful when designing a test is the **statistical power** of the test, also called the **probability of correct rejection**. It is calculated as 1 − β (β being the Type II error rate). The higher the statistical power of a given experiment, the lower the probability of making a Type II error, i.e. the higher the probability of detecting an effect when there is one. In fact, the power is precisely the complement of the probability of a Type II error.
#
# It is common to design experiments with a statistical power of 80% or better, e.g. 0.80. This means accepting a 20% probability of a Type II error, which differs from the standard 5% significance level used for Type I errors.
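# As a quick sanity check of the significance level, we can simulate many experiments in which H<sub>0</sub> is true by construction and count how often a test (here Welch's t-test; `scipy` assumed available, sample sizes made up) wrongly rejects it — the rate should land near α:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
n_sims = 2000
false_rejections = 0
for _ in range(n_sims):
    # Both samples come from the SAME population, so H0 is true by construction.
    a = rng.normal(0, 1, 50)
    b = rng.normal(0, 1, 50)
    _, p = stats.ttest_ind(a, b, equal_var=False)
    if p < alpha:
        false_rejections += 1
type1_rate = false_rejections / n_sims
print(type1_rate)  # close to alpha = 0.05
```

# With enough simulated experiments, the false-rejection rate settles near the tolerance we set.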
# ### The statistic function and the p-value of the test
# Now that we have defined some key definitions of a test, we will explore how a test is performed and how decisions are made based on its output.
#
# One of the most important quantities of a test is the **statistic function**. It is a function of the data (and possibly other parameters) that follows a pre-defined and known distribution. It can be seen as a numerical summary of a data set that reduces the data to one value that can be used to perform the hypothesis test. Examples of various statistic functions can be found in this [Wikipedia article](https://en.wikipedia.org/wiki/Test_statistic).
#
# As an example, we present the statistic function of the *two-independent samples t-test*. This test is used when we want to compare the means of two independent populations (like in our on-site employees example, where we want to calculate and compare the mean number of hours people aim to spend on-site on each site). The statistic function is
#
# $$ t = \frac{(\bar{x_1} - \bar{x_2}) - d}{\sqrt{\frac{s{_1}^2}{n_1} + \frac{s{_2}^2}{n_2}}} $$
#
# and is symbolized by $t$. A small legend:
#
# - $\bar{x_1}$: The mean number of hours for employees on site A in the sample
# - $\bar{x_2}$: The mean number of hours for employees on site B in the sample
# - $s_1^2$: The variance of the number of hours for employees on site A in the sample
# - $s_2^2$: The variance of the number of hours for employees on site B in the sample
# - $n_1$: Number of people asked from site A in the sample
# - $n_2$: Number of people asked from site B in the sample
# - $d$: The desired difference we want in our null hypothesis (in this case, we assumed equality, so this is 0)
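# As a sketch with made-up samples (the sample sizes, means, and spreads below are illustrative, and `scipy` is assumed available), we can compute $t$ by hand from the formula above and check it against the library's Welch t-test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
site_a = rng.normal(3.0, 1.0, 40)   # hypothetical on-site hours, site A
site_b = rng.normal(2.3, 1.2, 35)   # hypothetical on-site hours, site B

d = 0  # H0: the two means are equal
t_manual = ((site_a.mean() - site_b.mean()) - d) / np.sqrt(
    site_a.var(ddof=1) / len(site_a) + site_b.var(ddof=1) / len(site_b))

# Welch's t-test (no equal-variance assumption) computes the same statistic.
t_scipy, p_value = stats.ttest_ind(site_a, site_b, equal_var=False)
print(t_manual, t_scipy, p_value)
```

# The hand-computed statistic matches the library's value; the library additionally returns the p-value.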
# Once we have calculated the value of the statistic function, we can calculate how "likely" this value is, since we have already defined a distribution for it. Thus, we can see how far from or close to the tolerance level we are (*remember α, this is our tolerance*).
#
#
# The area under any probability distribution is equal to 1. And since the x-axis is ordered, we can calculate the probability of observing a value greater or lower than the threshold set by our tolerance.
#
# <img src="../img/normal_distribution.png">
#
# *Source: https://blogs.sas.com/content/iml/2019/07/22/extreme-value-normal-data.html*
#
# But depending on the design of our alternative hypothesis, we want to allocate our tolerance accordingly. Let's go back to our on-site employees example and the 3 different options we presented for the alternative hypothesis.
#
# In the first scenario, we only reject the null hypothesis when the mean number of hours for site A is (statistically) **greater** than for site B. This means that we can allocate our tolerance **only on the right part of the distribution**, and all of the 5% will be allocated on the right (for option 2, we allocate our tolerance on the left part of the distribution).
#
# For option 3, we want to allocate our tolerance in both the right and the left tail of our distribution. Assuming a tolerance equal to α, we split it equally: α/2 on the right and α/2 on the left.
#
# <img src ="../img/rejection_area.jpg">
#
# For any model our statistic function follows, the probability of observing a value more extreme (either lower or higher) than the one we observed (the actual value of the $t$ function in this case) is called the **p-value**.
# ### To reject, or not to reject?
# At this stage, we have calculated everything: the t-statistic, the p-value, and we have a pre-set α value. But how do we decide whether or not to reject the null hypothesis?
#
# *Be aware, we **NEVER** accept the null hypothesis, we simply do not have enough evidence to reject it!*
#
# The p-value is an indication of how "likely" the observed data are under H<sub>0</sub>. Thus, if the p-value **is greater than the tolerance level α**, we do not have enough evidence against H<sub>0</sub> and we cannot reject it. In contrast, if the p-value is lower than α, we can safely reject the null hypothesis.
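# The decision rule boils down to a single comparison (the p-value below is a hypothetical value, not computed from data):

```python
alpha = 0.05        # pre-set tolerance (significance level)
p_value = 0.012     # hypothetical p-value returned by some test

if p_value < alpha:
    decision = "reject H0"
else:
    decision = "fail to reject H0"   # note: never "accept H0"
print(decision)
```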
# ## Key concepts in Machine Learning
# Machine learning (ML) is the study of computer algorithms that can improve automatically through experience and by the use of data.
#
# *The major difference between machine learning and statistics is their purpose. Machine learning models are designed to make the most accurate predictions possible. Statistical models are designed for inference about the relationships between variables.*
#
# Machine learning is all about results: it is like working in a company where your work is judged solely by how good your results are. Statistical modeling, on the other hand, is more about finding relationships between variables and the significance of those relationships, while also catering for prediction.
#
# <img src=../img/machine_learning.png>
#
# *Source: https://datascience.stackexchange.com/questions/42621/data-science-related-funny-quotes*
# ### Types of machine learning problems
#
# Not all machine learning problems are created equal, nor do they use the same methods to make predictions. Given the problem's nature, the available data, and the researcher's goal, a machine learning problem can fall into one of the following categories:
#
# - **Supervised learning**: Supervised learning (SL) is the machine learning task of learning a function that maps an input to an output based on example input-output pairs. In supervised learning, each example is a pair consisting of an input and a desired output value. A supervised learning algorithm analyzes the training data and produces an inferred function, which can be used for mapping new examples. An optimal scenario will allow for the algorithm to correctly determine the class labels for unseen instances. This requires the learning algorithm to generalize from the training data to unseen situations.
#
# The most widely used learning algorithms are:
# - Support-vector machines
# - **Linear regression**
# - **Logistic regression**
# - Naive Bayes
# - Decision trees
# - K-nearest neighbor algorithm
# - Neural networks (Multilayer perceptron)
#
#
# - **Unsupervised learning**: Unsupervised learning (UL) is a type of machine learning in which the algorithm is not provided with any pre-assigned labels or scores for the training data. As a result, unsupervised learning algorithms must first self-discover any naturally occurring patterns in that training data set. Advantages of unsupervised learning include a minimal workload to prepare and audit the training set, in contrast to supervised learning techniques where a considerable amount of expert human labor is required to assign and verify the initial tags, and greater freedom to identify and exploit previously undetected patterns.
#
# The most widely used learning algorithms are:
# - Hierarchical clustering
# - K-means
# - Mixture models
#
#
# - **Reinforcement learning**: Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning differs from supervised learning in not needing labelled input/output pairs to be presented, and in not needing sub-optimal actions to be explicitly corrected. Instead, the focus is on finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge).
# ### A brief introduction to linear regression
# **Linear regression** is a linear approach for modelling the relationship between a collection of explanatory variables (also known as predictors or independent variables) and a scalar response (also known as the dependent variable). The case of one explanatory variable is called *simple linear regression*; for more than one, the process is called *multiple linear regression*.
#
# In linear regression, the relationships are modeled using linear predictor functions whose unknown model parameters are estimated from the data. Such models are called linear models. The most basic linear model is
#
# $$ y_i = \beta_0 + \beta_1 * x_i + \epsilon_i $$
#
#
# where:
#
# - $y$ is the **dependent** variable
# - $x$ is the **independent** variable
# - $\beta_0 \; \text{and} \; \beta_1$ are the coefficients of the regression
# - $\epsilon_i $ is the error term (*sometimes also called noise*)
#
# <img src="../img/linear_least_squares.png">
#
# *Source: Wikipedia*
#
# *The observations (red) are assumed to be the result of random deviations (green) from an underlying relationship (blue) between a dependent variable (y) and an independent variable (x).*
# While linear regression is the go-to method for many problems involving a relationship between numerical variables, there are some underlying assumptions behind these models. These include:
#
# - **Linearity**: This means that the mean of the response variable is a linear combination of the parameters (regression coefficients) and the predictor variables (more on this on the ML tutorial).
# - **Lack of perfect multicollinearity in the predictors**: For standard least squares estimation methods, the predictors (if more than one) must have full column rank p; otherwise perfect multicollinearity exists in the predictor variables, meaning a linear relationship exists between two or more predictor variables.
# - A few more that require a deeper dive into statistical theory and are outside the scope of this course.
# The purpose of linear regression is to estimate the values of $\beta_p, \; p= 0,1,...,M$. This will give a model in which whenever we feed it a value for every predictor, it will be able to *predict* a value for the target variable $y$. Let's go through an example:
#
# Let's assume that we want to model the relationship between the time a student studies, the time they sleep, and their final exam grade. In this case, we want to predict the exam grade, so the grade is the **dependent** variable, while the other two variables are the **independent** variables. Ideally, we end up with a model of the following form:
#
# $$ \text{final_exam_grade} = \beta_0 + \beta_1 * \text{how_many_hours_a_student_studied} + \beta_2 * \text{how_many_hours_a_student_slept} $$
#
# The interpretations of the $\beta$ coefficients are the following:
#
# - $\beta_0$: The grade of a student who spent 0 hours studying and sleeping (an extreme case)
# - $\beta_1$: How much the grade of a student would improve with ONE extra hour of **studying** in their schedule
# - $\beta_2$: How much the grade of a student would improve with ONE extra hour of **sleep** in their schedule
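# A minimal sketch of this model on synthetic data (the "true" coefficients 40, 3 and 2 are made up for illustration), with the $\beta$ values estimated by ordinary least squares via `numpy`:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
hours_studied = rng.uniform(0, 10, n)
hours_slept = rng.uniform(4, 9, n)
# Hypothetical ground truth: beta0=40, beta1=3, beta2=2, plus noise epsilon.
grade = 40 + 3 * hours_studied + 2 * hours_slept + rng.normal(0, 2, n)

# Ordinary least squares: solve for [beta0, beta1, beta2].
X = np.column_stack([np.ones(n), hours_studied, hours_slept])
beta_hat, *_ = np.linalg.lstsq(X, grade, rcond=None)
print(beta_hat)  # estimates close to [40, 3, 2]
```

# The recovered coefficients are close to the ones we used to generate the data, which is exactly what "estimating the $\beta$s" means.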
# ### A brief introduction to logistic regression
# In many cases, the problem that we aim to solve is slightly different from the one we solved with linear regression. Let's take the students example posed above. Instead of predicting the exact grade of each student, we want to predict whether the student *passes or fails* the exam.
#
# This turns our problem into a *classification* problem; we want to classify each data point (in this case, each student) into one of two classes. This class of problems gives rise to the **logistic regression** model.
#
# The logistic model (or logit model) is used to model the probability of a certain class or event existing, such as pass/fail, win/lose, alive/dead or healthy/sick. Each data point is assigned a probability between 0 and 1 for each class, and the two class probabilities sum to one.
# Similar to linear regression, we want to use our predictors (the **independent** variable(s)) to make predictions about the **dependent** variable, which in this case is either 0 or 1. Instead of trying to estimate an exact value, we model the *odds*, which translates to: *how much more probable is it for a data point to belong to class 1 rather than class 0?* This can be mathematically modelled as:
#
# $$ \text{odds} = \frac{p}{1-p} $$
#
# where $p$ is the probability of a data point belonging to class 1. To make the connection with the linear regression (on its simpler case), the final form of the model is:
#
# $$ log_b(\frac{p}{1-p}) = \beta_0 + \beta_1*x_1$$
# and with simple algebraic operation, we can calculate the probability of class one as:
#
# $$ p = \frac{b^{\beta_0 + \beta_1*x_1}}{b^{\beta_0 + \beta_1*x_1} + 1}$$
#
# where $b$ is the base of the logarithm we choose (common choices are $e$, 2 or 10; no need to worry about it).
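# As a small numerical sketch of the back-transform above (the coefficients are hypothetical, with base $b = e$):

```python
import numpy as np

beta0, beta1 = -4.0, 1.0   # hypothetical fitted coefficients
x = 6.0                    # a single predictor value

log_odds = beta0 + beta1 * x                   # log_b(p / (1 - p)) with b = e
p = np.exp(log_odds) / (np.exp(log_odds) + 1)  # probability of class 1
print(p)
```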
#
#
# <img src="../img/logistic_curve.jpeg">
#
# *Source: Wikipedia*
# **Remember that the logistic model returns probabilities, not class assignments!** Many people assume that this model returns class assignments, e.g. in the students example, whether the student passes or fails the exam.
#
# To turn a probability into a class assignment, the researcher usually sets their own threshold (usually 0.5, unless there are conditions outside the scope of this course).
#
# Finally, another difference between linear and logistic regression is the interpretation of the $\beta$ coefficients. In logistic regression:
#
# - $\beta_0$: The log-odds of a data point belonging to class 1 when all the predictors are set to 0.
# - $\beta_i$: The amount by which the **log-odds** increase when the value of predictor $i$ increases by 1.
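# A small sketch of both points with hypothetical coefficients (base $e$): increasing predictor $i$ by 1 multiplies the odds by $e^{\beta_i}$, and the class label only appears once we threshold the probability:

```python
import numpy as np

beta0, beta1 = -4.0, 1.0   # hypothetical fitted coefficients (base e)

def odds(x):
    # Odds of class 1 at predictor value x.
    return np.exp(beta0 + beta1 * x)

# Increasing the predictor by 1 multiplies the odds by exp(beta1).
ratio = odds(5.0) / odds(4.0)

# Class assignment only after thresholding the probability (threshold 0.5 here).
p = 1 / (1 + np.exp(-(beta0 + beta1 * 5.0)))
label = int(p >= 0.5)
print(ratio, p, label)
```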
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
# My custom library of photoelastic tools
import sys
sys.path.append('/home/jack/workspaces/jupyter-workspace/pepe/')
from pepe.visualize import visCircles, visContacts
from pepe.visualize import genRandomDistancedColors
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['figure.dpi'] = 140
from IPython.display import clear_output
# +
centers = [[100, 200], [200, 300], [300, 150]]
radii = [100, 75, 60]
ax = visCircles(centers, radii, annotations=[1, 2, 3], setBounds=True)
# +
forceArr = [1, 3, 2]
betaArr = [0, np.pi, 2]
alphaArr = [0, 0, 0]
c = centers[0]
r = radii[0]
ax = visContacts(c, r, alphaArr, betaArr, forceArr, setBounds=True)
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Notebook
# +
import numpy as np
import random
import time
import glob
import os
import sys
import unittest
import collections
from collections import Counter
import dash
from dash.dependencies import Output, Input
import dash_core_components as dcc
import dash_html_components as html
import plotly.graph_objs as go
import plotly.express as px
from plotly.subplots import make_subplots
# +
# #%%timeit
# Every list comprehension can be rewritten as a for loop, but not every for loop can be rewritten as a list comprehension.
def connected_hostnames(logpath, init_datetime, end_datetime, Hostname):
    """Count hosts that connected to, or received a connection from, Hostname within the interval."""
    connected_hosts = []
    with open(logpath) as input_log:
        for line in input_log:
            timestamp, src, dst = line.split()
            # Check if within the interval.
            if init_datetime <= int(timestamp) <= end_datetime:
                # The host initialized the connection: record the receiver.
                if src == Hostname:
                    connected_hosts.append(dst)
                # The host received the connection: record the initializer.
                elif dst == Hostname:
                    connected_hosts.append(src)
            # Exit early once the interval is exceeded (the log is time-ordered).
            elif int(timestamp) > end_datetime:
                break
    return collections.Counter(connected_hosts)
'''
def connected_hostnames_one_liner(filepath, init_datetime, end_datetime, Hostname):
s=[line.split()[2] if (init_datetime <= int(line.split()[0]) <= end_datetime and line.split()[1]==Hostname) else line.split()[1] if (init_datetime<= int(line.split()[0]) <= end_datetime and line.split()[2]==Hostname) else None for line in reversed(list(open(filepath)))]
return list(filter(None, s))
'''
# -
# %%timeit
connected_hostnames("data/input-file.txt",1565647205599,1565679364288, 'Jadon')
# %%timeit
# The one-liner variant is kept commented out above; uncomment it before timing this cell.
#connected_hostnames_one_liner("data/input-file.txt",1565647205599,1565679364288, 'Jadon')
def connected_to(logpath, init_datetime, end_datetime, Hostname):
    """Count hosts that initiated a connection to Hostname within the interval."""
    hostnames = []
    with open(logpath) as input_log:
        # Walk the log backwards so we can stop once we pass the interval start.
        for line in reversed(list(input_log)):
            timestamp, src, dst = line.split()
            if init_datetime <= int(timestamp) <= end_datetime and dst == Hostname:
                hostnames.append(src)
            if int(timestamp) < init_datetime:
                break
    return collections.Counter(hostnames)


def received_from(logpath, init_datetime, end_datetime, Hostname):
    """Count hosts that received a connection from Hostname within the interval."""
    hostnames = []
    with open(logpath) as input_log:
        for line in reversed(list(input_log)):
            timestamp, src, dst = line.split()
            if init_datetime <= int(timestamp) <= end_datetime and src == Hostname:
                hostnames.append(dst)
            if int(timestamp) < init_datetime:
                break
    return collections.Counter(hostnames)


def generated_conn(logpath, init_datetime, end_datetime):
    """Count the connections generated by each host within the interval."""
    hostnames = []
    with open(logpath) as input_log:
        for line in reversed(list(input_log)):
            timestamp, src, _ = line.split()
            if init_datetime <= int(timestamp) <= end_datetime:
                hostnames.append(src)
            if int(timestamp) < init_datetime:
                break
    return collections.Counter(hostnames)
# +
'''
strings in Python are immutable, and the “+” operation involves creating a new string and copying the old content
at each step. A more efficient approach would be to use the array module to modify the individual characters and
then use the join() function to re-create your final string.
'''
def process_log_files(Hostname, past_time, log_ofo_time):
    # `while 1` is a single jump operation, as it is a numerical comparison.
    while 1:
        connected_hosts, received_hosts, active_hosts = Counter(), Counter(), Counter()
        init_datetime = int((time.time() - past_time) * 1000)
        end_datetime = int(time.time() * 1000)
        past_files = sorted(
            [filename for filename in glob.glob("output/*.txt")
             if os.path.getmtime(filename) >= init_datetime / 1000 - log_ofo_time],
            key=os.path.getmtime)[::-1]
        for filename in past_files:
            connected_hosts += connected_to(filename, init_datetime, end_datetime, Hostname)
            received_hosts += received_from(filename, init_datetime, end_datetime, Hostname)
            active_hosts += generated_conn(filename, init_datetime, end_datetime)
'''
## Data transformation for display :
#converting 2d list into 1d , and consider multiple occurences by applying collection
connected_hosts=collections.Counter(sum(connected_hosts,[]))
received_hosts=collections.Counter(sum(received_hosts,[]))
#convert to collection to include other hosts if they have similar occurences as the first one.
active_hosts= collections.Counter(sum(active_hosts,[]))
'''
active_hosts=[h for h in active_hosts.most_common() if h[1]==active_hosts.most_common(1)[0][1]]
print(" ".join(['Hosts that connected to ', Hostname ,'in the last', str(past_time),'s are: ',str(connected_hosts),'\n']))
print(" ".join(['Hosts that received connection from', Hostname ,'in the last', str(past_time),'s are: ',str(received_hosts),'\n']))
print(" ".join(['the hostname that generated most connections in the last', str(past_time),'s is: ', str(active_hosts),'\n']))
print('--------------------------------\n\n')
print(''.join(['It is : ', time.strftime('%X %x'),'. the next output is in ', str(past_time), ' s. \n']))
time.sleep(past_time)
# -
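# A tiny illustration of the `join()` pattern mentioned in the comment above (the host names are made up):

```python
# Repeated "+" concatenation copies the growing string on every step;
# str.join() builds the result in a single pass.
parts = ['Hosts', 'that', 'connected:', 'Hannibal,', 'Hanny']
message = " ".join(parts)
print(message)  # Hosts that connected: Hannibal, Hanny
```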
process_log_files('Hannibal', 5000 , 0 )
# +
class NamesTestCase(unittest.TestCase):
# Test connected_hostnames() on short and long files
def test_connected_hostnames_sf(self):
result = connected_hostnames("data/input_test_case_1.txt",1607880434801,1607880438820, 'Steeve')
self.assertEqual(result, {'Hanny': 1, 'Hannibal': 2})
def test_connected_hostnames_lf(self):
result = connected_hostnames("data/input-file.txt",1565647204351,1565733598341, 'Dristen')
self.assertEqual(result, {'Aadison': 1, 'Wilkens': 1, 'Kahlina': 1, 'Alei': 1, 'Zhanasia': 1, 'Jamor': 1, 'Joy': 1})
# Test connected_to() on short and long files
def test_connect_to_sf(self):
result = connected_to("data/input_test_case_1.txt",1607880434801,1607880438820, 'Steeve')
self.assertEqual(result, {'Hannibal': 1})
def test_connect_to_lf(self):
result = connected_to("data/input-file.txt",1565647204351,1565733598341, 'Jadon')
self.assertEqual(result, {'Ahmya': 1, 'Kayleann': 1, 'Shainah': 1, 'Aniyah': 1, 'Eveleigh': 1, 'Caris': 1, 'Rahniya': 1, 'Remiel': 1})
# Test received_from() on short and long files
def test_received_from_sf(self):
result = received_from("data/input_test_case_1.txt",1607880434801,1607880438820, 'Steeve')
self.assertEqual(result, {'Hannibal': 1, 'Hanny': 1})
def test_received_from_lf(self):
result = received_from("data/input-file.txt",1565647204351,1565733598341, 'Dristen')
self.assertEqual(result, {'Joy': 1, 'Jamor': 1, 'Zhanasia': 1, 'Alei': 1, 'Kahlina': 1, 'Wilkens': 1, 'Aadison': 1})
# Test generated_conn
def test_generated_conn(self):
result = generated_conn("data/input_test_case_1.txt",1607880434801,1607880438820)
self.assertEqual(result, {'Hannibal': 3, 'Steeve': 2, 'Hanny': 1})
if __name__ == '__main__':
unittest.main(argv=['first-arg-is-ignored'], exit=False)
# -
connected_hostnames("data/input_test_case_1.txt",1607880434801,1607880438820, 'Steeve')
connected_hostnames("data/input-file.txt",1565647204351,1565733598341, 'Dristen')
connected_to("../data/input_test_case_1.txt",1607880434801,1607880438820, 'Steeve')
s=connected_to("../data/input-file.txt",1565647204351,1565733598341, 'Jadon')
s.values()
received_from("data/input_test_case_1.txt",1607880434801,1607880438820, 'Steeve')
received_from("data/input-file.txt",1565647204351,1565733598341, 'Dristen')
generated_conn("data/input_test_case_1.txt",1607880434801,1607880438820)
generated_conn("data/input-file.txt",1565647204351,1565733598341).most_common(1)
# +
import logging
import threading
import time
def thread_function(name):
logging.info("Thread %s: starting", name)
time.sleep(2)
logging.info("Thread %s: finishing", name)
if __name__ == "__main__":
format = "%(asctime)s: %(message)s"
logging.basicConfig(format=format, level=logging.INFO,
datefmt="%H:%M:%S")
threads = list()
for index in range(3):
logging.info("Main : create and start thread %d.", index)
x = threading.Thread(target=thread_function, args=(index,))
threads.append(x)
x.start()
for index, thread in enumerate(threads):
logging.info("Main : before joining thread %d.", index)
thread.join()
logging.info("Main : thread %d done", index)
# -
def generated_conn_dash(logpath, init_datetime, end_datetime):
    """Like generated_conn, but returns the raw list of initiating hosts."""
    hostnames = []
    with open(logpath) as input_log:
        for line in reversed(list(input_log)):
            timestamp, src, _ = line.split()
            if init_datetime <= int(timestamp) <= end_datetime:
                hostnames.append(src)
            if int(timestamp) < init_datetime:
                break
    return hostnames
# +
class Dashboard:
    def __init__(self):
external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css']
app = dash.Dash(__name__, external_stylesheets=external_stylesheets)
colors = {
'background': '#111111',
'text': '#7FDBFF'
}
app.layout = html.Div(style={'backgroundColor': colors['background']}, children=[
html.H1(
children='Network log analytics',
style={
'textAlign': 'center',
'color': colors['text']
}
),
html.Div(
[
html.Div(
[
html.H2("""Select a host:""",
style={'margin-right': '1em', 'color': colors['text']})
],
),
dcc.Dropdown(
id='hosts_dropdown',
options=[
{'label': 'Hannibal', 'value': 'Hannibal'},
{'label': 'Hanny', 'value': 'Hanny'},
{'label': 'Steeve', 'value': 'Steeve'}
],
placeholder="Default value 'Hannibal'",
value="Hannibal",
style=dict(width='40%',display='inline-block')
)
],
style={'display': 'flex', 'align-items': 'center'}
),
dcc.Graph(id='live-graphs_host'),
dcc.Interval(id='graph-update', interval=0.5*10000)
])
#global connected_hosts, received_hosts, active_hosts
@app.callback(
Output("live-graphs_host", "figure"),
Input(component_id='hosts_dropdown', component_property='value'),
Input('graph-update', 'n_intervals'))
def update_output(value, interval):
log_ofo_time =0
init_datetime=int((time.time()-4)*1000)
end_datetime=int(time.time()*1000)
past_files=sorted( [ filename for filename in glob.glob("../output/*.txt") if os.path.getmtime(filename)>=init_datetime/1000-log_ofo_time ] , key=os.path.getmtime)[::-1]
for filename in past_files:
set_connected_hosts(connected_to(filename,init_datetime,end_datetime,value))
set_received_hosts(received_from(filename,init_datetime,end_datetime,value))
set_active_hosts(generated_conn(filename, init_datetime, end_datetime))
fig = make_subplots( rows=2, cols=2,
specs=[[{"type": "domain"}, {"type": "domain"}],
[{"colspan": 2}, None]],
subplot_titles=("Generated connections","received connections", "total number of connections of all hosts")
)
fig.add_trace(go.Pie(labels=list(get_connected_hosts.keys()), values=list(get_connected_hosts.values()), textinfo='label+value', name='connected to', hole=.65),
row=1, col=1)
fig.add_trace(go.Pie(labels=list(get_received_hosts.keys()), values=list(get_received_hosts.values()), textinfo='label+value', name='received from', hole=.65),
row=1, col=2)
fig.add_trace(go.Bar(x=list(get_active_hosts.keys()), y=list(get_active_hosts.values()), name="All connections", marker=dict(color='orange', coloraxis="coloraxis") ),
row=2, col=1)
#fig.update_layout(
#title_text="Host that connected and received connection from the selected host")
return fig
app.run_server(host='0.0.0.0', port=8080,debug=True, use_reloader=False)
# +
class Dashboard:
    def __init__(self):
        external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css']
        self.app = dash.Dash(__name__, external_stylesheets=external_stylesheets)
        # The update callback is registered on self.app from outside the class.
        # Counters accumulated across log files by the callback.
        self.connected_hosts = Counter()
        self.received_hosts = Counter()
        self.active_hosts = Counter()
        colors = {
            'background': '#111111',
            'text': '#7FDBFF'
        }
self.app.layout = html.Div(style={'backgroundColor': colors['background']},
children=[
html.H1(
children='Network log analytics',
style={
'textAlign': 'center',
'color': colors['text']
}
),
html.Div(
[
html.Div(
[
html.H2("""Select a host:""",
style={'margin-right': '1em', 'color': colors['text']})
],
),
dcc.Dropdown(
id='hosts_dropdown',
options=[
{'label': 'Hannibal', 'value': 'Hannibal'},
{'label': 'Hanny', 'value': 'Hanny'},
{'label': 'Steeve', 'value': 'Steeve'}
],
placeholder="Default value 'Hannibal'",
value="Hannibal",
style=dict(width='40%',display='inline-block')
)
],
style={'display': 'flex', 'align-items': 'center'}
),
dcc.Graph(id='live-graphs_host'),
dcc.Interval(id='graph-update', interval=0.5*10000)
])
def set_connected_hosts(self,x):
self.connected_hosts+=x
def get_connected_hosts(self):
return self.connected_hosts
def set_received_hosts(self,x):
self.received_hosts+=x
def get_received_hosts(self):
return self.received_hosts
def set_active_hosts(self,x):
self.active_hosts+=x
def get_active_hosts(self):
return self.active_hosts
def get_app(self):
return self.app
# +
dsh = Dashboard()
app= dsh.get_app()
@app.callback(
Output("live-graphs_host", "figure"),
Input(component_id='hosts_dropdown', component_property='value'),
Input('graph-update', 'n_intervals'))
def update_output(value, interval):
log_ofo_time =0
init_datetime=int((time.time()-4)*1000)
end_datetime=int(time.time()*1000)
past_files=sorted( [ filename for filename in glob.glob("../output/*.txt") if os.path.getmtime(filename)>=init_datetime/1000-log_ofo_time ] , key=os.path.getmtime)[::-1]
for filename in past_files:
dsh.set_connected_hosts(connected_to(filename,init_datetime,end_datetime,value))
dsh.set_received_hosts(received_from(filename,init_datetime,end_datetime,value))
dsh.set_active_hosts(generated_conn(filename, init_datetime, end_datetime))
fig = make_subplots( rows=2, cols=2,
specs=[[{"type": "domain"}, {"type": "domain"}],
[{"colspan": 2}, None]],
subplot_titles=("Generated connections","received connections", "total number of connections of all hosts")
)
fig.add_trace(go.Pie(labels=list(dsh.get_connected_hosts().keys()), values=list(dsh.get_connected_hosts().values()), textinfo='label+value', name='connected to', hole=.65),
row=1, col=1)
fig.add_trace(go.Pie(labels=list(dsh.get_received_hosts().keys()), values=list(dsh.get_received_hosts().values()), textinfo='label+value', name='received from', hole=.65),
row=1, col=2)
fig.add_trace(go.Bar(x=list(dsh.get_active_hosts().keys()), y=list(dsh.get_active_hosts().values()), name="All connections", marker=dict(color='orange', coloraxis="coloraxis") ),
row=2, col=1)
#fig.update_layout(
#title_text="Host that connected and received connection from the selected host")
return fig
# -
app.run_server(host='0.0.0.0', port=8080,debug=True, use_reloader=False)
import plotly.graph_objects as go
from plotly.subplots import make_subplots
# +
app.layout = html.Div(
[
html.Label(['File processing App']),
dcc.Dropdown(
id='hosts_dropdown',
options=[
{'label': 'Hannibal', 'value': 'Hannibal'},
{'label': 'Hanny', 'value': 'Hanny'},
{'label': 'Steeve', 'value': 'Steeve'}
],
value="Hannibal",
),
dcc.Graph(id='live-graphs_host'),
dcc.Interval(id='graph-update', interval=5000)  # interval in milliseconds
]
)
# -
'''
#when 'n_intervals' of the 'graph-update' interval fires, update_graph_scatter(input_data) refreshes the 'figure' of the 'live-graph'.
@app.callback(Output('live-graph', 'figure'),
[Input('graph-update', 'n_intervals')])
def update_graph_scatter(input_data):
global hostnames
log_ofo_time =0
init_datetime=int((time.time()-4)*1000)
end_datetime=int(time.time()*1000)
past_files=sorted( [ filename for filename in glob.glob("../output/*.txt") if os.path.getmtime(filename)>=init_datetime/1000-log_ofo_time ] , key=os.path.getmtime)[::-1]
for filename in past_files:
hostnames=np.concatenate((hostnames,np.array(generated_conn_dash(filename, init_datetime, end_datetime))))
#connected_hostss+=connected_to(filename,init_datetime,end_datetime,'Hannibal')
unique, val=np.unique(hostnames, return_counts=True)
figure = px.histogram(hostnames, range_y=[0, max(val)+10])
return figure
'''
# +
hosnamesss= Counter()
init_datetime=int((time.time()-1000)*1000)
end_datetime=int(time.time()*1000)
past_files=sorted( [ filename for filename in glob.glob("../output/*.txt") if os.path.getmtime(filename)>=init_datetime/1000 ] , key=os.path.getmtime)[::-1]
for filename in past_files:
#s=np.array(generated_conn_dash(filename, init_datetime, end_datetime))
hosnamesss+=generated_conn(filename, init_datetime, end_datetime)
#connected_hostss+=connected_to(filename,init_datetime,end_datetime,'Hannibal')
# -
hosnamesss
# +
list(hosnamesss.values())
# +
connectedhosts=Counter()
log_ofo_time =0
init_datetime=int((time.time()-500)*1000)
end_datetime=int(time.time()*1000)
past_files=sorted( [ filename for filename in glob.glob("../output/*.txt") if os.path.getmtime(filename)>=init_datetime/1000-log_ofo_time ] , key=os.path.getmtime)[::-1]
for filename in past_files:
#hostnames=np.concatenate((hostnames,np.array(generated_conn_dash(filename, init_datetime, end_datetime))))
#connected_hostss+=connected_to(filename,init_datetime,end_datetime,'Hannibal')
connectedhosts+=connected_to(filename,init_datetime,end_datetime,'Hannibal')
# -
dict(connectedhosts).items()
# +
s=pd.DataFrame(dict(connectedhosts).items(), columns=['Name', 'Value'])
# -
s
s.pivot_table(s, values='Value', columns = 'Name')
px.pie(data_frame=s,values='Value',names='Name')
connectedhosts+=Counter({'Hanny': 12, 'Steeve': 3, 'Hannibal': 4})
np.array(list(connectedhosts.values()))
px.pie(list(connectedhosts.values()),labels=list(connectedhosts.keys()))
px.histogram(x=list(connectedhosts.keys()),y=list(connectedhosts.values()), range_y=[0, max(connectedhosts.values())+40])
len(np.unique(hosnamess))
input1='anass'
print(u'Input 1 {}'.format(input1))
unique, val=np.unique(hosnamess, return_counts=True)
val
hosnamess
np.histogram(list(hosnamess),bins=range(0, 60, 5))
for line in reversed(list(open('../output/log_1.txt'))):
#print(''.join(['parsed line: ',line]))
#if (int(line.split()[0]) >= int(1607948683153) and int(line.split()[0])<= int(1607948883412) ):
#print(''.join(['----> considered line: ',line]))
#hostnames.append(line.split()[1])
print(line)
# leftover scatter-update body, disabled: X and Y are undefined here and
# `return` is only valid inside a function
'''
X.append(X[-1]+1)
Y.append(Y[-1]+Y[-1]*random.uniform(-0.1,0.1))
data = plotly.graph_objs.Scatter(
x=list(X),
y=list(Y),
name='Scatter',
mode= 'lines+markers'
)
return {'data': [data],'layout' : go.Layout(xaxis=dict(range=[min(X),max(X)]),
yaxis=dict(range=[min(Y),max(Y)]),)}
'''
# +
import numpy as np
data = np.random.normal(3, 2, size=500)
# +
hosts=['Hannibal', 'Hanny', 'Steeve']
x,y=random.sample(hosts, 2)
# -
y
# +
'''
return {
'data': [go.Pie(labels=list(connected_hosts.keys()),
values=list(connected_hosts.values()),
#marker=dict(colors=colors),
textinfo='label+value',
hole=.7,
)],}
'''
# -
| docs/Code.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Quantum Circuits to Generate the Collapses
# To generate the collapses of the overlapping blocks, we will build three types of circuits, one for each way a block can be split.
#
# The tiles can be separated into up to 4 sub-pieces, in the following combinations:
#
# - 50% , 50%
# - 50% , 25% , 25%
# - 25% , 25% , 25% , 25%
#
# That order is kept in the sub-piece list so that each position always refers to the same probability.
#
# That is, the possible lists to indicate the superposed blocks will be:
#
# - 2 blocks:
# - `[ block_50 , None , None , block_50 ]`
# - 3 blocks:
# - `[ block_50 , block_25 , None , block_25 ]`
# - `[ block_25 , None , block_25 , block_50 ]`
# - 4 blocks
# - `[ block_25 , block_25 , block_25 , block_25 ]`
#
# The order to consider the result will be:
#
# $$\big[ \, |00\rangle , |01\rangle , |10\rangle , |11\rangle \, \big]$$
#
# For example, if the simulation returns the state $|01\rangle$, the block in position `1` prevails, and it is the one that remains after the collapse.
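The mapping from a measured bitstring to the surviving block can be sketched as follows. This is a minimal illustration only; `collapse` and the example list are hypothetical names, not part of the notebook's code:

```python
# Map a measured bitstring to the block that survives the collapse.
# The two classical bits are read as a binary index into the block list:
# '00' -> 0, '01' -> 1, '10' -> 2, '11' -> 3.
def collapse(blocks, result_state):
    return blocks[int(result_state, 2)]

blocks = ["block_50", "block_25", None, "block_25"]  # the 50/25/25 case
print(collapse(blocks, "01"))  # -> block_25
```

Note that positions holding `None` can never be measured in the corresponding circuit, since those outcomes have zero amplitude.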
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
# ## Circuit for two blocks: 50% - 50%
# +
q = QuantumRegister(2)
c = ClassicalRegister(2)
qc = QuantumCircuit(q, c)
# create a Bell state: outcomes 00 and 11, each with probability 1/2
qc.h(0)
qc.cx(0,1)
qc.measure(q, c)
result_state = list(execute(qc,Aer.get_backend('qasm_simulator'), shots=1).result().get_counts(qc).keys())[0]
# for display purposes of this notebook
print(result_state)
display(qc.draw('mpl'))
job = execute(qc,Aer.get_backend('qasm_simulator'), shots=1000)
counts = job.result().get_counts(qc)
print(counts)
# end for display purposes of this notebook
# -
# ## Circuit for three blocks: 50% - 25% - 25%
# ### `[ block_50 , block_25 , None , block_25 ]`
# +
q = QuantumRegister(2)
c = ClassicalRegister(2)
qc = QuantumCircuit(q, c)
# apply first Hadamard
qc.h(0)
qc.measure(q, c)
# apply a second Hadamard on qubit 1 only if the first measurement gave 1 (register |01>)
qc.h(1).c_if(c, 1)
qc.measure(q, c)
result_state = list(execute(qc,Aer.get_backend('qasm_simulator'), shots=1).result().get_counts(qc).keys())[0]
# for display purposes of this notebook
print(result_state)
display(qc.draw('mpl'))
job = execute(qc,Aer.get_backend('qasm_simulator'), shots=1000)
counts = job.result().get_counts(qc)
print(counts)
# end for display purposes of this notebook
# -
# ### `[ block_25 , None , block_25 , block_50 ]`
# +
q = QuantumRegister(2)
c = ClassicalRegister(2)
qc = QuantumCircuit(q, c)
# flip qubit 1, then put qubit 0 in superposition
qc.x(1)
qc.h(0)
qc.measure(q, c)
# apply a second Hadamard on qubit 1 only if the first measurement gave 2 (register |10>)
qc.h(1).c_if(c, 2)
qc.measure(q, c)
result_state = list(execute(qc,Aer.get_backend('qasm_simulator'), shots=1).result().get_counts(qc).keys())[0]
# for display purposes of this notebook
print(result_state)
display(qc.draw('mpl'))
job = execute(qc,Aer.get_backend('qasm_simulator'), shots=1000)
counts = job.result().get_counts(qc)
print(counts)
# end for display purposes of this notebook
# -
# ## Circuit for four blocks: 25% - 25% - 25% - 25%
# +
q = QuantumRegister(2)
c = ClassicalRegister(2)
qc = QuantumCircuit(q, c)
# apply Hadamards
qc.h(0)
qc.h(1)
qc.measure(q, c)
result_state = list(execute(qc,Aer.get_backend('qasm_simulator'), shots=1).result().get_counts(qc).keys())[0]
# for display purposes of this notebook
print(result_state)
display(qc.draw('mpl'))
job = execute(qc,Aer.get_backend('qasm_simulator'), shots=1000)
counts = job.result().get_counts(qc)
print(counts)
# end for display purposes of this notebook
# +
arr = ["hol", None, "bye", "otros"]
print(arr.index("hol"))
# -
| media/Quantum Circuits to Generate the Collapses.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# #!/usr/bin/env python
# coding: utf-8
# In[1]:
import pandas as pd
import numpy as np
import math
import random
import matplotlib.pyplot as plt
plt.style.use("seaborn-darkgrid")
import os
# In[2]:
def coordinates_on_circle(n):
"""Returns x,y coordinates of points on an unit circle with spacing 2π/n"""
if type(n)!=int:
raise Exception("Wrong input: \n the argument must be an integer number of points on the unit circle!")
x,y = [],[]
step_angle = 2*math.pi/n
for i in range(0,n):
x.append(math.cos(i*step_angle))
y.append(math.sin(i*step_angle))
return x,y
# In[3]:
def create_starting_graph(n,r):
if type(r)!=int:
raise Exception("Wrong input: \n r must be an integer number of edges between vertices")
if r>n-1:
raise Exception("Wrong input: \n r must not exceed n-1!")
coords = coordinates_on_circle(n)
#create adjacency_matrix as pandas df
#Initialize adjacency matrix
adj_mat_df = pd.DataFrame(np.zeros([n,n]),dtype='bool')
#Make starting connections with pbc
for i in range(0,n):
#left
if(i-r>=0):
adj_mat_df.iloc[i][i-r:i] = True
else:
diff = r-i
adj_mat_df.iloc[i][0:i] = True
adj_mat_df.iloc[i][n-diff:n+1] = True
#right
if(i+r<n):
adj_mat_df.iloc[i][i+1:i+r+1] = True #+1 to avoid self loop and up to sym value
else:
diff = i+r-n
adj_mat_df.iloc[i][i+1:n+1] = True
adj_mat_df.iloc[i][0:diff+1] = True
return adj_mat_df
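As a quick sanity check of the starting configuration (a standalone NumPy sketch under the same conventions, not the pandas frame built above): in the ring lattice, every vertex should have degree 2r.

```python
import numpy as np

def ring_lattice(n, r):
    # boolean adjacency matrix: each vertex linked to its r nearest
    # neighbours on each side, with periodic boundary conditions
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for k in range(1, r + 1):
            adj[i, (i + k) % n] = True
            adj[i, (i - k) % n] = True
    return adj

adj = ring_lattice(10, 3)
print(adj.sum(axis=1))  # every vertex has degree 2*r = 6
```

The row sums of the matrix produced by `create_starting_graph` should agree with this before any rewiring is applied.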
# In[4]:
def create_strogatz(n, r, p, place_labels=False):
"""Builds, rewires, and plots a Watts-Strogatz graph on a unit circle."""
#Procedure to create results folder automatically
path = os.getcwd()
results_dir = "/results_WS"
try:
os.mkdir(path+results_dir)
except OSError:
print ("Creation of the directory %s failed" % results_dir)
else:
print ("Successfully created the directory %s " % results_dir)
#names for file paths
name_plot = "/plot_n%d_r%d_p%.3f.png" %(n,r,p)
name_csv = "/data_n%d_r%d_p%.3f.csv" %(n,r,p)
name_plot_rewired = "/plot_rewired_n%d_r%d_p%.3f.png" %(n,r,p)
name_csv_rewired = "/data_rewired_n%d_r%d_p%.3f.csv" %(n,r,p)
#check for errors
if p>1 or p<0:
raise Exception("Wrong input: \n p must be in [0,1]")
coords = coordinates_on_circle(n)
adj_mat = create_starting_graph(n,r)
labels_nodes = []
nodes_coords = coordinates_on_circle(n)
#figure settings
fig, (ax1, ax2) = plt.subplots(1, 2,figsize=(16,9))
plt.subplots_adjust(wspace=0.3)
plt.suptitle("WS(N=%d; 2r = %d), Starting configuration"%(n,2*r),fontsize=25)
#plot graph
for i in range(0,n):
connections_list = adj_mat[adj_mat.iloc[i] == True].index.tolist()
#print(connections_list)
for k in range(0,len(connections_list)):
ax1.plot([nodes_coords[0][i],nodes_coords[0][connections_list[k]]],[nodes_coords[1][i],nodes_coords[1][connections_list[k]]],linewidth=.5,color='indianred')
ax1.plot(nodes_coords[0],nodes_coords[1],color='steelblue',ls='none',marker='o',markersize=10,label=labels_nodes)
ax1.set_title("Graph representation",fontsize=20)
ax1.set_xticks([])
ax1.set_yticks([])
#labels on vertices
if place_labels==True:
for i in range(0,n):
labels_nodes.append("%d"%i)
ax1.text(nodes_coords[0][i],nodes_coords[1][i],labels_nodes[i],fontsize=15)
#plot adjacency matrix
ax2.set_xlabel("Edges",fontsize=20)
ax2.set_ylabel("Vertices",fontsize=20)
ax2.matshow(adj_mat,cmap='cividis')
ax2.set_title("Adjacency matrix",fontsize=25)
#save things!
adj_mat.to_csv(path+results_dir+name_csv,header=False, index=False)
plt.savefig(path+results_dir+name_plot,dpi=200)
plt.show()
#print("PRE REWIRING:",sum(adj_mat))
#rewiring! (anticlockwise, for sake of indices)
for i in range(0,n):
#print("working on row # %d"%(i))
#edge_list = list(adj_mat[adj_mat.iloc[i] == True].index.tolist())
#edge_list = [k for k in edge_list if k > i]
for j in range(0,r): #for each link to vertex i
if (random.random()<p): #attempt a rewire
#performing the rewire:
# - choose which of the connected edges to rewire -> deleted_edge
# - choose where to rewire it among the available positions -> candidates
# - make the new connection, delete the old one, keep the matrix symmetric
#choose which edge to remove [+periodic boundary conditions]
deleted_edge = i+1+j
if deleted_edge>n-1:
deleted_edge = deleted_edge-n
#choose an available position:
candidates = list(adj_mat[adj_mat.iloc[i] == False].index.tolist())
candidates.remove(i) #take out self loop
new_edge = random.choice(candidates)
#print("candidates list = ",candidates)
#print("new edge chosen = ",new_edge)
#create new wire
adj_mat.iloc[i][new_edge]=True
adj_mat.iloc[new_edge][i]=True
#delete old wire
adj_mat.iloc[i][deleted_edge]=False
adj_mat.iloc[deleted_edge][i]=False
#print("AFTER REWIRING:",sum(adj_mat))
#Plot rewired
fig, (ax1, ax2) = plt.subplots(1, 2,figsize=(16,9))
plt.subplots_adjust(wspace=0.3)
plt.suptitle("WS(N=%d; 2r = %d; p = %.3f)"%(n,2*r, p),fontsize=25)
#plot graph
for i in range(0,n):
connections_list = adj_mat[adj_mat.iloc[i] == True].index.tolist()
#print(connections_list)
for k in range(0,len(connections_list)):
ax1.plot([nodes_coords[0][i],nodes_coords[0][connections_list[k]]],[nodes_coords[1][i],nodes_coords[1][connections_list[k]]],linewidth=.5,color='indianred')
ax1.plot(nodes_coords[0],nodes_coords[1],color='steelblue',ls='none',marker='o',markersize=10,label=labels_nodes)
ax1.set_title("Graph representation",fontsize=20)
ax1.set_xticks([])
ax1.set_yticks([])
#labels on vertices
if place_labels==True:
for i in range(0,n):
labels_nodes.append("%d"%i)
ax1.text(nodes_coords[0][i],nodes_coords[1][i],labels_nodes[i],fontsize=15)
#plot adjacency matrix
ax2.set_xlabel("Edges",fontsize=20)
ax2.set_ylabel("Vertices",fontsize=20)
ax2.matshow(adj_mat,cmap='cividis')
ax2.set_title("Adjacency matrix",fontsize=25)
#save things!
adj_mat.to_csv(path+results_dir+name_csv_rewired,header=False, index=False)
plt.savefig(path+results_dir+name_plot_rewired,dpi=200)
plt.show()
return adj_mat
# In[82]:
# -
n,r,p=500,3,.3
adj_mat = create_strogatz(n,r,p)
s = adj_mat.sum(axis = 0, skipna = True)
f = sorted(s, reverse = True)
plt.hist(f,bins=20)
# +
from scipy.stats import chisquare
from scipy.stats import chi2
from scipy.stats import norm
from scipy.stats import poisson
from scipy.stats import powerlaw
from scipy.special import factorial
adj_data = pd.read_csv("Adj_a.csv",header=None)
#def psn(x,lbd):
# return np.power(lbd,x)*math.exp(-lbd)/factorial(x, exact=False)
# -
# +
len(adj_data)
plt.figure(figsize=(10,6))
s = adj_data.sum(axis = 0, skipna = True)
f = sorted(s/sum(s), reverse = True)
plt.plot(f,marker='.',ls='none')  # markers only, no connecting line
x = (np.linspace(0,len(f)-1, len(f)))
#plt.plot(x,poisson.pmf(x, 1, 0))
plt.semilogx()
plt.semilogy()
#plt.xlim(0,10)
# -
def normal_distrib(adj_mat):
s = adj_mat.sum(axis = 0, skipna = True)
f_unique_elements, f_counts_elements = np.unique(s, return_counts=True)
observed = f_counts_elements/sum(f_counts_elements)
x = f_unique_elements
p_result = 0
chi_result = 10000000000000000
mu_norm=-10000
sigma_norm=0
for i in range(1,100):
for j in range(1,100):
expected = norm.pdf(x,i,j)
chi_statistic, p_value = chisquare(observed, expected)
if p_value>p_result and chi_statistic<chi_result:
mu_norm, sigma_norm = i,j
p_result, chi_result= p_value, chi_statistic
plt.plot(x,observed,label="Data")
plt.plot(x,norm.pdf(x,mu_norm,sigma_norm),label="Normal Distribution")
plt.legend()
print(mu_norm,sigma_norm,p_result,chi_result)
chisquare(f_counts_elements, norm.pdf(f_unique_elements),6,)
#POISSON
def poiss_distrib(adj_mat):
s = adj_mat.sum(axis = 0, skipna = True)
f_unique_elements, f_counts_elements = np.unique(s, return_counts=True)
observed = f_counts_elements/sum(f_counts_elements)
x = f_unique_elements
p_result = 0
chi_result = 10000000000000000
mu_poisson=-10
for i in range(0,100):
expected = poisson.pmf(x, i, 0)
chi_statistic, p_value = chisquare(observed, expected)
if p_value>p_result and chi_statistic<chi_result:
mu_poisson = i
p_result, chi_result= p_value, chi_statistic
print(p_result,chi_result)
print(mu_poisson)
plt.plot(x,observed,label="Data")
plt.plot(x,poisson.pmf(x,mu_poisson,0),label="Poisson Distribution")
plt.legend()
poiss_distrib(adj_mat)
normal_distrib(adj_mat)
# +
def powpow(x,a,b):
"""a * x**b; cast to float so negative integer exponents are allowed"""
return a * np.power(np.asarray(x, dtype=float), b)
def powlaw(adj_mat):
s = adj_mat.sum(axis = 0, skipna = True)
f_unique_elements, f_counts_elements = np.unique(s, return_counts=True)
observed = f_counts_elements/sum(f_counts_elements)
x = f_unique_elements
p_result = 0
chi_result = 10000000000000000
for j in range(1,100):
for i in range(0,100):
expected = powpow(x, i,j)
chi_statistic, p_value = chisquare(observed, expected)
if p_value>p_result and chi_statistic<chi_result:
a,b = i,j
p_result, chi_result= p_value, chi_statistic
print(p_result,chi_result)
print(a,b)
plt.plot(x,observed,label="Data")
plt.plot(x,powpow(x,a,b),label="Power Distribution")
plt.legend()
# -
powlaw(adj_data)
x=[2,3,4,5]
powpow(x,-2,-3)
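The nested grid search above can be replaced by a nonlinear least-squares fit. A minimal sketch using `scipy.optimize.curve_fit` on synthetic data (the data here are illustrative, not taken from the adjacency file):

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(x, a, b):
    return a * np.power(x, b)

x = np.arange(1.0, 50.0)
y = power_law(x, 2.0, -1.5)              # noise-free synthetic data
(a_hat, b_hat), _ = curve_fit(power_law, x, y, p0=(1.0, -1.0))
print(a_hat, b_hat)  # recovers a ~ 2.0, b ~ -1.5
```

This also returns a covariance matrix for the fitted parameters, which the chi-square grid search does not provide.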
| Code/trial!.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Prepare data
#
# `rewrite_train_data_json` converts the raw annotation JSON files into flat CSVs.
# !pip install -U albumentations --quiet
# +
import os
import numpy as np
import pandas as pd
from collections import Counter
from sklearn.model_selection import train_test_split
import joblib
import json
import cv2
from time import time
import threading
import math
import pickle
from glob import glob
import torch
from sklearn import metrics
from DataSet.dataset import get_iwildcam_loader, data_prefetcher
from Utils.train_utils import cross_entropy,focal_loss, get_optimizer
from Utils.train_utils import mixup_data, mixup_criterion
from Models.model_factory import create_model
import warnings
warnings.filterwarnings("ignore")
os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'
os.environ['CUDA_VISIBLE_DEVICES'] = "0"
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print('device:', device)
# -
DATASET={'CCT':'iWildCam_2019_CCT','iNat':'iWildCam_2019_iNat_Idaho','IDFG':'iWildCam_IDFG'} #_images_small
DATA_DIR='./data/'
ANNOTATION_DIR =DATA_DIR+ 'iWildCam_2019_Annotations/'
# +
def rewrite_train_data_json(dataset='CCT'):
json_path=ANNOTATION_DIR+DATASET[dataset]+'.json'
json_data = json.load(open(json_path,'r'))
images = json_data['images']
annotations = json_data['annotations']
csv_data={'category_id':[],'date_captured':[],'id':[],'file_name':[],
'rights_holder':[],'width':[],'height':[],'location':[]}
print('len of data:',dataset,len(images))
for ii,(img, annot) in enumerate(zip(images,annotations)):
if img['id'] != annot['image_id']:
print('image/annotation id mismatch at',ii,img['id'],annot['image_id'])
if 'date_captured' in img:
date=img['date_captured']
elif 'datetime' in img:
date = img['datetime']
else:
date = json_data['info']['date_created']
csv_data['date_captured'] += [date]
csv_data['category_id'] += [annot['category_id']]
csv_data['file_name'] += [img['file_name']]
csv_data['rights_holder'] += [img['rights_holder']]
csv_data['id'] += [img['id']]
csv_data['width'] += [img['width']]
csv_data['height'] += [img['height']]
if 'location' in img:
locat = img['location']
else:
locat=-1
csv_data['location'] += [locat]
csv_data = pd.DataFrame(csv_data)
csv_data.to_csv(ANNOTATION_DIR+DATASET[dataset]+'.csv',index=False)
def split_train_dev(CCT=True,iNat=True):
columns=['category_id','date_captured','id','file_name',
'rights_holder','width','height','location']
train=pd.DataFrame()
if CCT:
temp=pd.read_csv(ANNOTATION_DIR+DATASET['CCT']+'.csv')[columns]
temp['dataset'] = 'CCT'
temp['file_name'] = temp['file_name'].map(lambda x:'iWildCam_2019_CCT_images_small/'+x)
print('use CCT data',temp.shape)
train=pd.concat([train,temp])
if iNat:
temp=pd.read_csv(ANNOTATION_DIR+DATASET['iNat']+'.csv')[columns]
temp['dataset'] = 'iNat'
temp['file_name'] = temp['file_name'].map(lambda x: 'iWildCam_2019_iNat_Idaho/' + x)
print('use iNat data',temp.shape)
train=pd.concat([train,temp])
print('train shape',train.shape)
#train=train.sample(frac=1,random_state=0).reset_index(drop=True)
dev_file = train[train['location'] == 46] # 46
train_file = train[train['location'] != 46]
train_file.to_csv(DATA_DIR+'train_file.csv',index=False)
dev_file.to_csv(DATA_DIR+'dev_file.csv',index=False)
print('category ratio for train data:')
cnt = Counter(train_file['category_id'].values)
L = len(train_file)
for ii in range(23):
print(ii, cnt[ii], cnt[ii] / L)
print('category ratio for dev data:')
cnt = Counter(dev_file['category_id'].values)
L = len(dev_file)
for ii in range(23):
print(ii, cnt[ii], cnt[ii] / L)
def save_test():
columns=['date_captured','id','file_name',
'rights_holder','width','height','location']
test = pd.read_csv(DATA_DIR+'test.csv')[columns]
test['dataset'] = 'test'
test['category_id'] = -1
test['file_name'] = test['file_name'].map(lambda x:'test_images/'+x)
print('test shape',test.shape) #153730
test.to_csv(DATA_DIR+'test_file.csv',index=False)
full_data_dir='data/raw_data/iWildCam_2019_IDFG/iWildCam_IDFG_images/'
def get_test_orig_size_split(test_file,name=0):
name=str(name)
print('get_test_orig_size_split for thread',name,test_file.shape)
file_names= test_file['file_name'].values
width,height=[],[]
t1=time()
for ii,fname in enumerate(file_names):
mod_name =full_data_dir + fname.split('/')[-1]
image = cv2.imread(mod_name)
s = image.shape  # (height, width, channels)
width.append(s[1])
height.append(s[0])
if ii%100==0:
print('threads %s, index %d, time-cost %f min'%(name,ii,(time()-t1)/60))
if ii % 1000 == 0:
joblib.dump([ii,width,height],DATA_DIR+'raw_data/test_size_temp_{}.pkl'.format(name))
test_file['width']=width
test_file['height'] = height
print(name,'test shape',test_file.shape) #153730
test_file.to_csv(DATA_DIR+'raw_data/test_file_orig_{}.csv'.format(name),index=False)
def get_test_size_multi_thread(thread_num=1):
test_file = pd.read_csv(DATA_DIR+'test_file.csv')
test_file['small_width']=test_file['width']
test_file['small_height'] = test_file['height']
chunk=math.ceil(len(test_file)/thread_num)
thread_list=[]
for ii in range(thread_num):
sup_file=test_file.iloc[ii*chunk:(ii+1)*chunk]
thr=threading.Thread(target=get_test_orig_size_split,args=(sup_file,ii))
thread_list.append(thr)
for t in thread_list:
t.daemon = True
t.start()
for t in thread_list:
t.join()
def merge_test_size_file():
data=pd.DataFrame()
for name in range(10):
data_path=DATA_DIR + 'raw_data/test_file_orig_{}.csv'.format(str(name))
temp=pd.read_csv(data_path)
data=pd.concat([data,temp])
print(name,data.shape)
data.to_csv(DATA_DIR + 'raw_data/test_file.csv',index=False)
# -
def prepare_data(CCT=True,iNat=True):
if CCT:
rewrite_train_data_json('CCT')
if iNat:
rewrite_train_data_json('iNat')
split_train_dev(CCT=CCT,iNat=iNat)
save_test()
prepare_data()
# +
TRAIN_DATASET={'CCT':'iWildCam_2019_CCT','iNat':'iWildCam_2019_iNat_Idaho','IDFG':'iWildCam_IDFG'} #_images_small
DATA_DIR='./data/'
ANNOTATION_DIR =DATA_DIR+ 'iWildCam_2019_Annotations/'
bbox_detect_dir='data/bbox/'
# -
def crop_image(img_names,ws,ids,img2det,bbox_detect_dir):
print('images num:',len(img_names))
print('detection num:', len(img2det))
t1=time()
miss=0
for ii in range(len(img_names)):
img_file = img_names[ii]
if os.path.exists(bbox_detect_dir + 'bbox_temp/' + img_file):
continue
dirs = img_file.split('/')
for jj in range(len(dirs)):
now_dir = '/'.join(dirs[:jj])
temp_dir = bbox_detect_dir + 'bbox_temp/' + now_dir
if not os.path.exists(temp_dir):
os.mkdir(temp_dir)
crop_dir = bbox_detect_dir + 'cropped_image/' + now_dir
if not os.path.exists(crop_dir):
os.mkdir(crop_dir)
img_id = ids[ii]
image = cv2.imread(DATA_DIR+img_file)
iBox = 0
try:
box = img2det[img_id][iBox]
except KeyError as e:
print(e)
miss+=1
# with open(bbox_detect_dir+'bug_img/img_list.txt','a') as f:
# f.write(img_file+'\t'+img_id+'\n')
continue
imageWidth = image.shape[1]
ratio = imageWidth / ws[ii]
box_new = [x * ratio for x in box]
buffer_scale=0.2
ww = max(0, int((box_new[3] - box_new[1]) * buffer_scale))
hh = max(0, int((box_new[2] - box_new[0]) * buffer_scale))
topRel = int(max(0, box_new[0] - hh))
leftRel = int(max(0, box_new[1] - ww))
bottomRel = int(box_new[2] + hh)
rightRel = int(box_new[3] + ww)
cropped = image.copy()[leftRel:rightRel, topRel:bottomRel] #.copy()
if len(cropped) == 0:
with open(bbox_detect_dir + 'cropped_image/zero_crop.txt','a') as f:
f.write(img_file+'\n')
cv2.imwrite(bbox_detect_dir + 'cropped_image/' + img_file,np.array([0]))
else:
cv2.imwrite(bbox_detect_dir + 'cropped_image/' + img_file,cropped)
#img_det = cv2.rectangle(image.copy(),(topRel, leftRel), (bottomRel, rightRel), (0, 255, 0), 3)
#cv2.imwrite(bbox_detect_dir + 'bbox_temp/' + img_file, img_det)
if ii%1000==0:
print('processing image',ii,(time()-t1)/1000)
t1 = time()
print('all miss data',miss)
print('load CCT detection results')
img2det_cct = {}
with open(bbox_detect_dir+'Detection_Results/CCT_Detection_Results_1.p', 'rb') as data_file:
temp = pickle.load(data_file, encoding='iso-8859-1')
for img, res in zip(temp['images'], temp['detections']):
img2det_cct[img] = res[:10]
with open(bbox_detect_dir+'Detection_Results/CCT_Detection_Results_2.p', 'rb') as data_file:
temp = pickle.load(data_file, encoding='iso-8859-1')
for img, res in zip(temp['images'], temp['detections']):
img2det_cct[img] = res[:10]
# Crop the CCT training images
train_file=pd.read_csv(DATA_DIR+'train_file.csv')
dev_file = pd.read_csv(DATA_DIR + 'dev_file.csv')
data_file=pd.concat([train_file,dev_file])
data_cct = data_file[data_file['dataset']=='CCT'].reset_index(drop=True)
img_names = data_cct['file_name'].values
ids = data_cct['id'].values
ws = data_cct['width'].values
crop_image(img_names, ws, ids, img2det_cct, bbox_detect_dir)
print('load iNat detection results')
img2det_inat = {}
with open(bbox_detect_dir+'Detection_Results/iNat_Idaho_Detection_Results.p', 'rb') as data_file:
temp = pickle.load(data_file, encoding='iso-8859-1')
for img, res in zip(temp['images'], temp['detections']):
img2det_inat[img] = res[:10]
print('load IDFG (test) detection results')
img2det_test = {}
with open(bbox_detect_dir + 'Detection_Results/IDFG_Detection_Results_1.p', 'rb') as data_file:
temp = pickle.load(data_file, encoding='iso-8859-1')
for img, res in zip(temp['images'], temp['detections']):
img2det_test[img] = res[:10]
with open(bbox_detect_dir + 'Detection_Results/IDFG_Detection_Results_2.p', 'rb') as data_file:
temp = pickle.load(data_file, encoding='iso-8859-1')
for img, res in zip(temp['images'], temp['detections']):
img2det_test[img] = res[:10]
# +
data_inat = data_file[data_file['dataset']=='iNat'].reset_index(drop=True)
img_names = data_inat['file_name'].values
ids = data_inat['id'].values
ws = data_inat['width'].values
crop_image(img_names, ws, ids, img2det_inat, bbox_detect_dir)
# +
def crop_image(img_names,ws,ids,img2det,bbox_detect_dir):
print('images num:',len(img_names))
print('detection num:', len(img2det))
t1=time()
miss=0
for ii in range(len(img_names)):
img_file = img_names[ii]
if os.path.exists(bbox_detect_dir + 'bbox_temp/' + img_file):
continue
dirs = img_file.split('/')
for jj in range(len(dirs)):
now_dir = '/'.join(dirs[:jj])
temp_dir = bbox_detect_dir + 'bbox_temp/' + now_dir
if not os.path.exists(temp_dir):
os.mkdir(temp_dir)
crop_dir = bbox_detect_dir + 'cropped_image/' + now_dir
if not os.path.exists(crop_dir):
os.mkdir(crop_dir)
img_id = ids[ii]
image = cv2.imread(DATA_DIR+img_file)
iBox = 0
try:
box = img2det[img_id][iBox]
except KeyError as e:
print(e)
miss+=1
# with open(bbox_detect_dir+'bug_img/img_list.txt','a') as f:
# f.write(img_file+'\t'+img_id+'\n')
continue
if image is None:  # cv2.imread returns None when the file is missing or corrupt
# with open(bbox_detect_dir + 'image_none/.txt','a') as f:
# f.write(img_file+'\n')
cv2.imwrite(bbox_detect_dir + 'cropped_image/' + img_file,np.array([0]))
continue
imageWidth = image.shape[1]
ratio = imageWidth / ws[ii]
box_new = [x * ratio for x in box]
buffer_scale=0.2
ww = max(0, int((box_new[3] - box_new[1]) * buffer_scale))
hh = max(0, int((box_new[2] - box_new[0]) * buffer_scale))
topRel = int(max(0, box_new[0] - hh))
leftRel = int(max(0, box_new[1] - ww))
bottomRel = int(box_new[2] + hh)
rightRel = int(box_new[3] + ww)
cropped = image.copy()[leftRel:rightRel, topRel:bottomRel] #.copy()
# if len(cropped)==0 or len(cropped[0]) == 0:
# with open(bbox_detect_dir + 'cropped_image/zero_crop.txt','a') as f:
# f.write(img_file+'\n')
try:
cv2.imwrite(bbox_detect_dir + 'cropped_image/' + img_file,cropped)
except:
with open(bbox_detect_dir + 'cropped_image/zero_crop.txt','a') as f:
f.write(img_file+'\n')
cv2.imwrite(bbox_detect_dir + 'cropped_image/' + img_file,np.array([0]))
#img_det = cv2.rectangle(image.copy(),(topRel, leftRel), (bottomRel, rightRel), (0, 255, 0), 3)
#cv2.imwrite(bbox_detect_dir + 'bbox_temp/' + img_file, img_det)
if ii%100==0:
print('processing image',ii,(time()-t1)/100)
t1 = time()
print('all miss data',miss)
def detect_train_images(bbox_detect_dir='data/bbox/',CCT=True,iNat=True):
train_file=pd.read_csv(DATA_DIR+'train_file.csv')
dev_file = pd.read_csv(DATA_DIR + 'dev_file.csv')
data_file=pd.concat([train_file,dev_file])
if CCT:
print('begin to crop CCT data')
data_cct = data_file[data_file['dataset']=='CCT'].reset_index(drop=True)
img_names = data_cct['file_name'].values
ids = data_cct['id'].values
ws = data_cct['width'].values
img2det = {}
with open(bbox_detect_dir+'Detection_Results/CCT_Detection_Results_1.p', 'rb') as data_file:
temp = pickle.load(data_file, encoding='iso-8859-1')
for img, res in zip(temp['images'], temp['detections']):
img2det[img] = res[:10]
with open(bbox_detect_dir+'Detection_Results/CCT_Detection_Results_2.p', 'rb') as data_file:
temp = pickle.load(data_file, encoding='iso-8859-1')
for img, res in zip(temp['images'], temp['detections']):
img2det[img] = res[:10]
crop_image(img_names, ws, ids, img2det, bbox_detect_dir)
if iNat:
print('begin to crop iNat data')
data_cct = data_file[data_file['dataset']=='iNat'].reset_index(drop=True)
img_names = data_cct['file_name'].values
ids = data_cct['id'].values
ws = data_cct['width'].values
img2det = {}
with open(bbox_detect_dir+'Detection_Results/iNat_Idaho_Detection_Results.p', 'rb') as fin:
temp = pickle.load(fin, encoding='iso-8859-1')
for img, res in zip(temp['images'], temp['detections']):
img2det[img] = res[:10]
crop_image(img_names, ws, ids, img2det, bbox_detect_dir)
def detect_test_images(bbox_detect_dir='data/bbox/'):
print('detect test image')
data_file=pd.read_csv(DATA_DIR+'raw_data/test_file_orig.csv')
print('test_file',data_file.shape)
img_names = data_file['file_name'].values
ids = data_file['id'].values
ws = data_file['height'].values
img2det = {}
with open(bbox_detect_dir + 'Detection_Results/IDFG_Detection_Results_1.p', 'rb') as fin:
temp = pickle.load(fin, encoding='iso-8859-1')
for img, res in zip(temp['images'], temp['detections']):
img2det[img] = res[:10]
with open(bbox_detect_dir + 'Detection_Results/IDFG_Detection_Results_2.p', 'rb') as fin:
temp = pickle.load(fin, encoding='iso-8859-1')
for img, res in zip(temp['images'], temp['detections']):
img2det[img] = res[:10]
crop_image(img_names, ws, ids, img2det, bbox_detect_dir)
def rewrite_cropped_csv():
def check_file(df,prefix_dir='bbox/cropped_image/',name='train'):
df=df.reset_index(drop=True)
print(name,df.shape)
#new_df=pd.DataFrame()
t1=time()
file_names=df['file_name'].values
valid_ind=[]
new_width,new_height=[],[]
for ii, file in enumerate(file_names):
new_path = DATA_DIR + prefix_dir + file
if os.path.exists(new_path):
try:
img = cv2.imread(new_path)
sh=img.shape
except:
continue
new_width.append(sh[1])
new_height.append(sh[0])
valid_ind.append(ii)
if ii%1000==0:
print("datatype: %s, index: %d, time: %f min, data len: %d" % (name, ii, (time()-t1)/60, len(valid_ind)))
new_df = df.iloc[valid_ind].copy()  # copy so the new columns below do not trigger SettingWithCopyWarning
new_df['new_width']=new_width
new_df['new_height'] = new_height
print('new_df for:', name, new_df.shape)
new_df.to_csv(DATA_DIR+prefix_dir+name+'.csv',index=False)
return new_df
train = pd.read_csv(DATA_DIR+'train_file.csv')
new_train=check_file(train,name='train_file')
dev = pd.read_csv(DATA_DIR+'dev_file.csv')
new_dev=check_file(dev,name='dev_file')
test = pd.read_csv(DATA_DIR+'test_file.csv')
new_test=check_file(test,name='test_file')
# -
train_file=pd.read_csv(DATA_DIR+'train_file.csv')
dev_file = pd.read_csv(DATA_DIR + 'dev_file.csv')
data_file=pd.concat([train_file,dev_file])
def load_detect_modelb(bbox_detect_dir='data/bbox/'):
detect_train_images(CCT=True,iNat=True)
#merge_test_bbox()
detect_test_images()
detect_train_images(CCT=True,iNat=True)
#merge_test_bbox()
detect_test_images()
# +
from __future__ import absolute_import
import os
import albumentations as A
import cv2
import numpy as np
import pandas as pd
import torch
from torch.utils.data.dataset import Dataset
from torchvision import transforms
TRAIN_DATASET = {'CCT': 'iWildCam_2019_CCT', 'iNat': 'iWildCam_2019_iNat_Idaho',
'IDFG': 'iWildCam_IDFG'} # _images_small
def image_augment(p=.5, cut_size=8):
imgaugment = A.Compose([
A.HorizontalFlip(p=0.3),
A.GaussNoise(p=.1),
# A.OneOf([
# A.Blur(blur_limit=3, p=.1),
# A.GaussNoise(p=.1),
# ], p=0.2),
A.ShiftScaleRotate(shift_limit=0.0625, scale_limit=0.1, rotate_limit=10, border_mode=cv2.BORDER_CONSTANT,
value=(0, 0, 0), p=.3),
A.RandomBrightnessContrast(p=0.3),
A.HueSaturationValue(
hue_shift_limit=20, sat_shift_limit=20, val_shift_limit=20, p=0.1),
A.Cutout(num_holes=1, max_h_size=cut_size,
max_w_size=cut_size, p=0.3)
], p=p)
return imgaugment
class iWildCam(Dataset):
def __init__(self, params, mode='train'):
self.mode = mode
self.clahe = params['clahe']
self.gray = params['gray']
if 'train' in mode:
clahe_prob = params['clahe_prob']
gray_prob = params['gray_prob']
elif mode == 'infer':
clahe_prob = 1
gray_prob = 1
else:
clahe_prob = int(params['clahe_prob'] >= 1.0)
gray_prob = int(params['gray_prob'] >= 1.0)
if 'train' in mode:
print('use train augmented mode')
self.augment = params['aug_proba'] > 0
self.label_smooth = params['label_smooth']
else:
self.augment = False
self.label_smooth = False
self.one_hot = params['loss'] != 'focal' if mode != 'infer' else False
self.num_classes = params['num_classes']
self.root = params['data_dir']
mean_values = [0.3297, 0.3819, 0.3637]
std_values = [0.1816, 0.1887, 0.1877]
# mean_values = [0.3045, 0.3625, 0.3575]
# std_values = [0.1801, 0.1870, 0.1920]
self.resize = A.Resize(int(params['height'] * 1.1), int(params['width'] * 1.1), interpolation=cv2.INTER_CUBIC,
p=1.0)
self.crop = A.RandomCrop(params['height'], params['width'], p=1.0) if 'train' in mode else A.CenterCrop(
params['height'], params['width'], p=1.0)
if self.clahe:
self.imgclahe = A.CLAHE(
clip_limit=2.0, tile_grid_size=(16, 16), p=clahe_prob)
if self.gray:
self.imggray = A.ToGray(p=gray_prob)
if self.augment:
self.imgaugment = image_augment(
params['aug_proba'], params['cut_size'])
self.norm = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=mean_values,
std=std_values),
])
if mode == 'train':
self.file_dir = self.root + 'train_file.csv' # 'train_file_1.csv'
elif mode == 'dev' or mode == 'val' or mode == 'validation':
self.file_dir = self.root + 'dev_file.csv'
elif mode == 'test' or mode == 'infer':
self.file_dir = self.root + 'test_file.csv'
elif mode == 'train_dev' or mode == 'train_val':
self.file_dir = self.root + 'train_file.csv'
self.file_dir_1 = self.root + 'dev_file.csv'
else:
print('mode does not exist:', mode)
data_file = pd.read_csv(self.file_dir)
if mode == 'train':
if not params['CCT']:
data_file = data_file[data_file['dataset'] != 'CCT']
if not params['iNat']:
data_file = data_file[data_file['dataset'] != 'iNat']
if mode == 'train_dev' or mode == 'train_val':
temp = pd.read_csv(self.file_dir_1)
data_file = pd.concat([data_file, temp])
data_file = data_file.mask(
data_file.astype(object).eq('None')).dropna()
data_file['absolute_file_name'] = data_file['file_name'].map(
lambda x: os.path.join(self.root, x))
self.image_files = data_file['absolute_file_name'].values
self.image_ids = data_file['id'].values
print('dataset len:', len(self.image_files))
if mode != 'infer':
self.labels = data_file['category_id'].values
def __getitem__(self, index):
id = self.image_ids[index]
image = cv2.imread(self.image_files[index])
if image is not None:
image = self.resize(image=image)['image']
if self.clahe:
image = self.imgclahe(image=image)['image']
if self.augment:
image = self.imgaugment(image=image)['image']
if self.gray:
image = self.imggray(image=image)['image']
# image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = self.crop(image=image)['image']
image = self.norm(image)
else:
print(self.image_files[index])
if self.mode != 'infer':
label = self.labels[index]
if self.one_hot:
label = np.eye(self.num_classes)[label]
if self.label_smooth > 0:
label = (1 - self.label_smooth) * label + \
self.label_smooth / self.num_classes
else:
label = 0
if self.one_hot:
label = np.eye(self.num_classes)[label]
return (image, label, id)
def __len__(self):
return len(self.image_files)
def _category(self):
category2id = {
'empty': 0,
'deer': 1,
'moose': 2,
'squirrel': 3,
'rodent': 4,
'small_mammal': 5,
'elk': 6,
'pronghorn_antelope': 7,
'rabbit': 8,
'bighorn_sheep': 9,
'fox': 10,
'coyote': 11,
'black_bear': 12,
'raccoon': 13,
'skunk': 14,
'wolf': 15,
'bobcat': 16,
'cat': 17,
'dog': 18,
'opossum': 19,
'bison': 20,
'mountain_goat': 21,
'mountain_lion': 22
}
id2category = [
'empty',
'deer',
'moose',
'squirrel',
'rodent',
'small_mammal',
'elk',
'pronghorn_antelope',
'rabbit',
'bighorn_sheep',
'fox',
'coyote',
'black_bear',
'raccoon',
'skunk',
'wolf',
'bobcat',
'cat',
'dog',
'opossum',
'bison',
'mountain_goat',
'mountain_lion',
]
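The label handling in `__getitem__` above combines one-hot encoding (`np.eye(num_classes)[label]`) with label smoothing. In isolation, with this notebook's own settings (23 classes, `label_smooth=0.01`), the transform looks like this:

```python
import numpy as np

num_classes, eps = 23, 0.01
label = np.eye(num_classes)[5]                    # one-hot target, class 5
smoothed = (1 - eps) * label + eps / num_classes  # soften toward uniform
# The true class keeps ~0.99 of the mass, every class gains eps/23,
# and the smoothed vector still sums to 1.
print(smoothed[5] > 0.99, abs(smoothed.sum() - 1.0) < 1e-9)  # True True
```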
class data_prefetcher():
def __init__(self, loader, label_type='float'):
self.loader = iter(loader)
self.stream = torch.cuda.Stream()
self.label_type = label_type
self.preload()
def preload(self):
try:
self.next_input, self.next_target, self.next_ids = next(
self.loader)
except StopIteration:
self.next_input = None
self.next_target = None
self.next_ids = None
return
with torch.cuda.stream(self.stream):
self.next_input = self.next_input.cuda(non_blocking=True)
self.next_target = self.next_target.cuda(non_blocking=True)
#self.next_ids = self.next_ids.cuda(non_blocking=True)
self.next_input = self.next_input.float()
if self.label_type == 'float':
self.next_target = self.next_target.float()
else:
self.next_target = self.next_target.long()
def next(self):
torch.cuda.current_stream().wait_stream(self.stream)
input = self.next_input
target = self.next_target
ids = self.next_ids
self.preload()
return input, target, ids
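`data_prefetcher` hides host-to-device copy latency by always holding the *next* batch, staged on a side CUDA stream, while the model consumes the current one. Minus the CUDA specifics, the control flow is a one-item look-ahead iterator; a CPU-only sketch:

```python
class Prefetcher:
    """One-item look-ahead: preload on construction, then each next()
    returns the held item and immediately preloads its successor."""
    def __init__(self, loader):
        self.it = iter(loader)
        self._preload()

    def _preload(self):
        try:
            self.next_item = next(self.it)
        except StopIteration:
            self.next_item = None  # sentinel: loader exhausted

    def next(self):
        item = self.next_item
        self._preload()  # in the CUDA version, this copy overlaps with compute
        return item

p = Prefetcher([1, 2, 3])
out = []
item = p.next()
while item is not None:
    out.append(item)
    item = p.next()
print(out)  # [1, 2, 3]
```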
def get_iwildcam_loader(params, mode='train'):
if mode == 'train' or mode == 'train_val' or mode == 'train_dev':
train_data = iWildCam(params, mode=mode)
train_loader = torch.utils.data.DataLoader(
train_data, batch_size=params['batch_size'], shuffle=True,
num_workers=params['threads'], drop_last=True, pin_memory=True)
dev_data = iWildCam(params, mode='dev')
dev_loader = torch.utils.data.DataLoader(
dev_data, batch_size=params['eval_batch_size'], shuffle=False,
num_workers=params['threads'], drop_last=False, pin_memory=True)
return train_loader, dev_loader
elif mode == 'infer':
test_data = iWildCam(params, mode='infer')
test_loader = torch.utils.data.DataLoader(
test_data, batch_size=params['batch_size'], shuffle=False,
num_workers=params['threads'], drop_last=False, pin_memory=True)
return test_loader
else:
return None
# +
def evaluate(model, data_loader, criterion,use_onehot=True):
y_pred, y_true, losses=[],[],[]
with torch.no_grad():
inputs, labels, ids = data_loader.next()
while inputs is not None:
if use_onehot:
targets = np.argmax(labels.cpu().detach().numpy(), axis=1)
else:
targets = labels.cpu().detach().numpy()
y_true.extend(targets)
output = model(inputs)
loss = criterion(output, labels)
y_pred.extend(np.argmax(output.cpu().detach().numpy(), axis=1))
losses.append(loss.cpu().detach().numpy())
inputs, labels, ids = data_loader.next()
acc = metrics.accuracy_score(y_true, y_pred)
f1 = metrics.f1_score(y_true, y_pred, average='macro')
loss_val=np.mean(losses)
return loss_val, acc, f1
def train(params):
if params['init_model'] is not None:
model = torch.load(params['init_model'])
print('load model', params['init_model'])
else:
model = create_model(
params['Net'],
pretrained=params['pretrained'],
num_classes=params['num_classes'],
drop_rate=params['drop_rate'],
global_pool='avg',
bn_tf=False,
bn_momentum=0.99,
bn_eps=1e-3,
checkpoint_path=params['init_model'],
in_chans=3)
optimizer = get_optimizer(params,model)
param_num = sum([p.data.nelement() for p in model.parameters()])
print("Number of model parameters: {} M".format(param_num / 1024 / 1024))
model = model.to(device)
model.train()
if params['lr_schedule']:
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=params['lr_decay_epochs'], gamma=0.2)
if params['loss'] =='ce' or params['loss'] =='cross_entropy':
criterion = cross_entropy().to(device)
label_type = 'float'
elif params['loss'] =='focal':
criterion = focal_loss(gamma=1.0, alpha=1.0).to(device)
label_type='long'
else:
print('unknown loss:', params['loss'])
train_data_loader, dev_data_loader = get_iwildcam_loader(params,mode=params['mode'])
train_log=[]
dev_log=[]
best_acc, best_f1, best_epoch=0,0,0
t1 = time()
print('begin to train')
use_onehot=params['loss']!='focal'
for epoch in range(params['epochs']):
train_loader = data_prefetcher(train_data_loader,label_type)
inputs, labels, ids = train_loader.next()
i = 0
while inputs is not None:
mixup_now = np.random.random()<params['aug_proba']
if params['mixup'] and mixup_now:
inputs, labels_a, labels_b, lam = mixup_data(inputs, labels,
params['mixup_alpha'])
optimizer.zero_grad()
output = model(inputs)
if params['mixup'] and mixup_now:
loss = mixup_criterion(criterion, output, labels_a, labels_b, lam)
else:
loss = criterion(output, labels)
loss.backward()
optimizer.step()
if i % params['print_step'] == 0:
preds = np.argmax(output.cpu().detach().numpy(), axis=1)
if use_onehot:
targets = np.argmax(labels.cpu().detach().numpy(), axis=1)
else:
targets = labels.cpu().detach().numpy()
acc = metrics.accuracy_score(targets, preds)
loss_val = loss.cpu().detach().numpy()
f1 = metrics.f1_score(targets,preds,average='macro')
train_log.append([epoch,i, loss_val, acc, f1])
print("epoch: %d, iter: %d, train_loss: %.4f, train_acc: %.4f, train_f1: %.4f, time_cost_per_iter: %.4f s" % (
epoch, i, loss_val, acc, f1,(time() - t1)/params['print_step']))
with open(params['log_dir'] + 'train.tsv', 'a') as f:
f.write('%05d\t%05d\t%f\t%f\t%f\n' % (epoch, i, loss_val, acc, f1))
t1 = time()
if (i+1) % params['save_step'] == 0:
save_model_path= os.path.join(params['save_dir'], 'model_%d_%d.pkl' % (epoch,i))
torch.save(model,save_model_path)
print('save model to',save_model_path)
if (i+1) % params['eval_step'] == 0:
t2=time()
model.eval()
data_loader = data_prefetcher(dev_data_loader,label_type)
loss_val, acc, f1 = evaluate(model, data_loader, criterion,use_onehot)
model.train()
dev_log.append([epoch,i, loss_val, acc, f1])
if f1 > best_f1:
best_acc, best_f1, best_epoch = acc, f1, epoch
print('[Evaluation] -------------------------------')
print("epoch: %d, test acc: %.4f, f1-score: %.4f, loss: %.4f, best-f1-score: %.4f, eval_time: %.4f s" % (
epoch, acc, f1, loss_val, best_f1,time()-t2))
print('[Evaluation] -------------------------------')
with open(params['log_dir'] + 'eval.tsv', 'a') as f:
f.write('%05d\t%05d\t%f\t%f\t%f\n' % (epoch, i, loss_val, acc, f1))
inputs, labels, ids = train_loader.next()
i += 1
if params['lr_schedule']:
scheduler.step(epoch)
return model
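`train()` calls `mixup_data` and `mixup_criterion`, which are defined elsewhere in this file. For reference, the standard mixup formulation that such helpers usually implement — a NumPy sketch, not necessarily the exact code used here:

```python
import numpy as np

def mixup_data(x, y, alpha=1.0):
    """Convex-combine each example with a randomly paired one;
    lam ~ Beta(alpha, alpha) controls the mixing weight."""
    lam = np.random.beta(alpha, alpha) if alpha > 0 else 1.0
    idx = np.random.permutation(len(x))
    mixed_x = lam * x + (1 - lam) * x[idx]
    return mixed_x, y, y[idx], lam

def mixup_criterion(criterion, pred, y_a, y_b, lam):
    # The loss mixes the two targets with the same weight as the inputs.
    return lam * criterion(pred, y_a) + (1 - lam) * criterion(pred, y_b)

x = np.ones((4, 2))
y = np.arange(4)
mx, y_a, y_b, lam = mixup_data(x, y)
print(mx.shape, 0.0 <= lam <= 1.0)  # (4, 2) True
```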
def get_params():
params = {
'mode':'train_val',
'data_dir': 'data/bbox/cropped_image/', #['data/bbox/cropped_image/','data/']
'CCT':True,
'iNat':True,
'save_dir': 'final_output/output_0/',
'init_model': None,#'output_1/resnet_101_3_3427.pkl',
'Net': 'tf_efficientnet_b0', # 'resnet','wideresnet','tf_efficientnet_b0'
'pretrained': True,
'drop_rate':0.2,
'batch_size': 32,
'eval_batch_size': 32,
'num_classes': 23,
'epochs': 6,
'print_per_epoch':500,
'eval_per_epoch': 4,
'save_per_epoch': 4,
'loss':'ce',#['ce','focal']
'lr_schedule': True,
'lr': 5e-3,
'weight_decay':1e-6,
'optim': 'adam',
'lr_decay_epochs':[2,4],
'clahe':True,
'clahe_prob': 0.2,
'gray':True,
'gray_prob':0.01,
'aug_proba':0.5,
'cut_size':8,
'label_smooth':0.01,
'mixup':True,
'mixup_alpha':1,
'height':64,#380,#224 resnet, 300
'width':64,
'threads':2,
}
params['log_dir'] = os.path.join(params['save_dir'], 'log/')
if not os.path.exists(params['save_dir']):
os.mkdir(params['save_dir'])
if not os.path.exists(params['log_dir']):
os.mkdir(params['log_dir'])
with open(params['log_dir'] + 'eval.tsv', 'a') as f:
f.write('Epoch\tStep\tLoss\tAccuracy\tF1-Score\n')
with open(params['log_dir'] + 'train.tsv', 'a') as f:
f.write('Epoch\tStep\tLoss\tAccuracy\tF1-Score\n')
root = params['data_dir']
params['train_data_size'] = len(pd.read_csv(root + 'train_file.csv'))
params['dev_data_size'] = len(pd.read_csv(root + 'dev_file.csv'))
params['step_per_epoch'] = params['train_data_size'] // params['batch_size']
params['print_step'] = max(1,params['step_per_epoch']//params['print_per_epoch'])
params['eval_step'] = max(1,params['step_per_epoch']//params['eval_per_epoch'])
params['save_step'] = max(1,params['step_per_epoch']//params['save_per_epoch'])
json.dump(obj=params, fp=open(params['log_dir'] + 'parameters.json', 'w'))
print(params)
return params
def load_params(save_dir):
params_path=save_dir + 'log/parameters.json'
print('load params from', params_path)
params = json.load(fp=open(params_path, 'r'))
ckpts = glob(save_dir+'*.pkl')
if len(ckpts)>0:
ckpts = sorted(ckpts, key=lambda x: int(x.split('/')[-1].split('.')[0].split('_')[-1]))
params['init_model']=ckpts[-1]
print(params)
return params
# -
params = get_params()
train(params)
# !python3 train_model.py --gpu_ids -1
# !ls
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia
# language: julia
# name: julia-1.5
# ---
# # Adaptivity at a singularity
# In {doc}`basics-sing` we found an IVP that appears to blow up in a finite amount of time. Because the solution increases so rapidly as it approaches the blowup, adaptive stepping is required to even get close. In fact it's the failure of adaptivity that is used to get an idea of when the singularity occurs.
using FundamentalsNumericalComputation
f = (u,p,t) -> (t+u)^2
ivp = ODEProblem(f,1,(0.,1.))
t,u = FNC.rk23(ivp,1e-5);
plot(t,u,label="",
xlabel="t",yaxis=(:log10,"u(t)"),title="Finite-time blowup")
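The same experiment can be reproduced without `FNC.rk23`. Below is a minimal Python sketch (Python because the rest of this collection uses it) of an embedded RK2(3) pair with step-size control; the Bogacki–Shampine coefficients are standard, though `FNC.rk23`'s internals may differ. The exact blowup time for this IVP is t = π/4 ≈ 0.7854, and the integrator stalls just below it:

```python
def rk23_blowup(f, u0, t0, tf, tol=1e-5, h=0.01, hmin=1e-12):
    """Embedded Bogacki-Shampine RK2(3) with simple step-size control.
    Integration stops early when the step size underflows hmin --
    the same 'failure of adaptivity' used above to locate the blowup."""
    t, u = t0, u0
    ts, us = [t], [u]
    while t < tf:
        h = min(h, tf - t)
        k1 = f(t, u)
        k2 = f(t + h / 2, u + h / 2 * k1)
        k3 = f(t + 3 * h / 4, u + 3 * h / 4 * k2)
        u3 = u + h * (2 * k1 + 3 * k2 + 4 * k3) / 9      # 3rd-order solution
        k4 = f(t + h, u3)
        err = abs(h * (-5 * k1 / 72 + k2 / 12 + k3 / 9 - k4 / 8))
        scale = tol * max(1.0, abs(u))
        if err <= scale:                                  # accept the step
            t, u = t + h, u3
            ts.append(t)
            us.append(u)
        # standard controller: grow/shrink h toward the tolerance target
        h *= min(4.0, max(0.1, 0.8 * (scale / max(err, 1e-300)) ** (1 / 3)))
        if h < hmin:                                      # adaptivity fails
            break
    return ts, us

ts, us = rk23_blowup(lambda t, u: (t + u) ** 2, 1.0, 0.0, 1.0)
print(ts[-1])  # stalls just below the exact blowup time t = pi/4
```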
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import requests
url = 'http://en.most-famous-paintings.com/MostFamousPaintings.nsf/ListOfTop1000MostPopularPainting?OpenForm'
r = requests.get(url)
from bs4 import BeautifulSoup
soup = BeautifulSoup(r.content, 'html.parser')
# +
artist=[]
for l1 in soup.find_all('div', attrs={'class': 'mosaicflow__item'}):
artist.append((l1.text).strip('\n'))
images=[]
for img in soup.findAll('img'):
if img.get('data-original') is not None:
images.append("http://en.most-famous-paintings.com/"+img.get('data-original'))
# +
details=[]
rank = 1
for i in artist:
painter = i[:i.index('\n')]
painting = i[i.index('\n')+1:i.index('(')]
details.append([rank,painter,painting.strip()])
rank += 1
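The slicing above assumes each scraped entry has the form `"Painter\nPainting Title (year)"`. On a synthetic entry (the painter and title here are illustrative, not scraped data), the split works like this:

```python
entry = "Claude Monet\nWater Lilies (1916)"
painter = entry[:entry.index('\n')]                               # up to the newline
painting = entry[entry.index('\n') + 1:entry.index('(')].strip()  # up to the '('
print(painter, '|', painting)  # Claude Monet | Water Lilies
```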
# +
import requests
i=0
for item in details:
r = requests.get(images[i])
i += 1
x = "images/"+str(item[0])+"-"+item[1]+"-"+item[2]+".jpg"
print(x)
with open(x, "wb") as code:
code.write(r.content)
# -
import csv
with open('details.csv', 'w', newline='') as file:
writer = csv.writer(file)
writer.writerow(["SN", "Name", "Painting"])
for i in details:
writer.writerow(i)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from rdkit.Chem.rdMolDescriptors import CalcTPSA
import tensorflow as tf
import sklearn
from sklearn import svm
from sklearn.metrics import mean_squared_error, r2_score
from scipy import stats
import pandas as pd
import seaborn as sns
# +
def train_model(tox_type, regularization):
f = open('tox/'+tox_type+'.smiles','r')
contents = f.readlines()
fps_total=[]
tox_total=[]
num_mols = len(contents)-1
num_ = 0
# 1. Get molecular fingerprints of each molecules
for i in range(num_mols):
smi = contents[i].split()[0]
m = Chem.MolFromSmiles(smi)
if m is not None:
num_ += 1
fp = AllChem.GetMorganFingerprintAsBitVect(m,2)
arr = np.zeros((1,))
DataStructs.ConvertToNumpyArray(fp,arr)
fps_total.append(arr)
tox_total.append(int(contents[i].split()[2]))
# shuffle the sample set: a permutation keeps every sample exactly once
# (np.random.randint would sample with replacement, duplicating some rows and dropping others)
rand_int = np.random.permutation(num_)
fps_total = np.asarray(fps_total)[rand_int]
_tox_total = np.asarray(tox_total)[rand_int]
# 2. Split the dataset to training set, validation set, and test set
num_total = fps_total.shape[0]
num_train = int(num_total*0.67)
num_validation = int(num_total*0.16)
num_test = int(num_total*0.16)
tox_total = np.zeros((num_total, 2))
for i in range(num_total):
if _tox_total[i] == 0:
tox_total[i][0] = 1
tox_total[i][1] = 0
if _tox_total[i] == 1:
tox_total[i][0] = 0
tox_total[i][1] = 1
fps_train = fps_total[0:num_train]
tox_train = tox_total[0:num_train]
fps_validation = fps_total[num_train:num_validation+num_train]
tox_validation = tox_total[num_train:num_validation+num_train]
fps_test = fps_total[num_validation+num_train:]
tox_test = tox_total[num_validation+num_train:]
#3. Construct a neural network
X = tf.placeholder(tf.float64, shape=[None, 2048])
Y = tf.placeholder(tf.float64, shape=[None, 2])
if regularization == True:
h1 = tf.layers.dense(X, units=512, use_bias=True, activation=tf.nn.sigmoid, kernel_regularizer=tf.contrib.layers.l2_regularizer(scale=0.1))
h1 = tf.nn.dropout(h1,keep_prob=0.8)
h2 = tf.layers.dense(h1, units=512, use_bias=True, activation=tf.nn.sigmoid, kernel_regularizer=tf.contrib.layers.l2_regularizer(scale=0.1))
h2 = tf.nn.dropout(h2,keep_prob=0.8)
h3 = tf.layers.dense(h2, units=512, use_bias=True, activation=tf.nn.sigmoid, kernel_regularizer=tf.contrib.layers.l2_regularizer(scale=0.1))
if regularization == False:
h1 = tf.layers.dense(X, units=512, use_bias=True, activation=tf.nn.sigmoid)
h2 = tf.layers.dense(h1, units=512, use_bias=True, activation=tf.nn.sigmoid)
h3 = tf.layers.dense(h2, units=512, use_bias=True, activation=tf.nn.sigmoid)
Y_pred = tf.layers.dense(h3, units=2, use_bias=True)
Y_pred = tf.layers.flatten(Y_pred)
#4. Set a loss function, in this case we will use cross entropy
cross_entropy = tf.nn.softmax_cross_entropy_with_logits_v2(logits=Y_pred, labels=Y)
loss = tf.reduce_mean(cross_entropy)
#5. Set an optimizer
lr = tf.Variable(0.0, trainable = False) # learning rate
opt = tf.train.AdamOptimizer(lr).minimize(loss)
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
#6. Training & validation
batch_size = 150
epoch_size = 20
decay_rate = 0.95
batch_train = int(num_train/batch_size)
batch_validation = int(num_validation/batch_size)
batch_test = int(num_test/batch_size)
init_lr = 0.001
train_loss=[]
valid_loss=[]
for t in range(epoch_size):
train_avg_loss = 0
valid_avg_loss = 0
pred_train = []
sess.run(tf.assign( lr, init_lr*( decay_rate**t ) ))
for i in range(batch_train):
X_batch = fps_train[i*batch_size:(i+1)*batch_size]
Y_batch = tox_train[i*batch_size:(i+1)*batch_size]
_opt, _Y, _loss = sess.run([opt, Y_pred, loss], feed_dict = {X : X_batch, Y : Y_batch})
pred_train.append(_Y)
train_avg_loss += _loss / batch_train
pred_train = np.concatenate(pred_train, axis=0)
pred_validation = []
for i in range(batch_validation):
X_batch = fps_validation[i*batch_size:(i+1)*batch_size]
Y_batch = tox_validation[i*batch_size:(i+1)*batch_size]
_Y, _loss = sess.run([Y_pred, loss], feed_dict = {X : X_batch, Y : Y_batch})
pred_validation.append(_Y)
valid_avg_loss += _loss / batch_validation
pred_validation = np.concatenate(pred_validation, axis=0)
train_loss.append(train_avg_loss)
valid_loss.append(valid_avg_loss)
#print ("Epoch:", t, "train loss:", train_avg_loss, "valid. loss:", valid_avg_loss)
#7. test the model
pred_test = []
for i in range(batch_test):
X_batch = fps_test[i*batch_size:(i+1)*batch_size]
Y_batch = tox_test[i*batch_size:(i+1)*batch_size]
_Y, _loss = sess.run([Y_pred, loss], feed_dict = {X : X_batch, Y : Y_batch})
pred_test.append(_Y)
pred_test = np.concatenate(pred_test, axis=0)
#print (tox_test, pred_test)
tox_final_test=np.zeros(len(pred_test))
pred_final_test=np.zeros(len(pred_test))
for i in range(len(pred_test)):
if tox_test[i][0]==1:
tox_final_test[i]=0
if tox_test[i][0]==0:
tox_final_test[i]=1
for i in range(len(pred_test)):
if pred_test[i][0]>pred_test[i][1]:
pred_final_test[i]=0
if pred_test[i][0]<=pred_test[i][1]:
pred_final_test[i]=1
accuracy = sklearn.metrics.accuracy_score(tox_final_test, pred_final_test)
auc_roc = sklearn.metrics.roc_auc_score(tox_final_test, pred_final_test)
#print (tox_final_test, pred_final_test)
print ("type:", tox_type, "accuracy:",accuracy, "auc-roc:", auc_roc)
return train_loss, valid_loss
def plot(a, b):
train_loss = a[0]
valid_loss = a[1]
train_r_loss = b[0]
valid_r_loss = b[1]
sns.set(color_codes=True)
df = np.vstack((train_loss, train_r_loss, valid_loss, valid_r_loss))
df = np.transpose(df)
index = np.arange(20)
wide_df = pd.DataFrame(df,index, ["train loss","train loss_reg", "validation loss","validation loss_reg" ])
ax = sns.lineplot(data=wide_df)
# -
plot(train_model('nr-ahr', False), train_model('nr-ahr', True))
plot(train_model('nr-er', False), train_model('nr-er', True))
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Plot Evaluation Results
# +
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
from fastText import train_unsupervised
from fastText import load_model
import os
import gensim, logging
from gensim.scripts.glove2word2vec import glove2word2vec
import errno
from glove import Corpus, Glove
from datetime import datetime
import sys
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
import numpy as np
import pandas as pd
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
plt.rcParams['text.latex.preamble'] = r'\usepackage{amsfonts}'
# -
# ## Read the results CSV into Pandas Dataframe
df = pd.read_csv('./results/eval/results.csv', sep=',')
df.head()
# ## Select Desirable Fields
df1 = df[['fashionsim_spearman', "analogies_overallacc", "simlex_999_spearman", "wordsim353_spearman",'name']]
# ## Bar-plot of relative accuracies on three evaluation datasets for the Word-Similarity task
# +
N = len(df1["fashionsim_spearman"].values[0:5])
#N = 3
ind = np.arange(N) # the x locations for the groups
width = 0.27 # the width of the bars
fig = plt.figure()
ax = fig.add_subplot(111)
yvals = df1["fashionsim_spearman"].values[0:5]
rects1 = ax.bar(ind, yvals, width, color="#599ad3")
zvals = df1["simlex_999_spearman"].values[0:5]
rects2 = ax.bar(ind+width, zvals, width, color='#f9a65a')
kvals = df1["wordsim353_spearman"].values[0:5]
rects3 = ax.bar(ind+width*2, kvals, width, color='#9e66ab')
ax.set_ylabel('Spearman Correlation')
ax.set_xticks(ind+width)
labels = [x.replace("_", r"\_") for x in df1["name"].values]  # escape underscores for the LaTeX tick labels
ax.set_xticklabels(labels, rotation='vertical')
ax.legend( (rects1[0], rects2[0], rects3[0]), ('Fashionsim', 'Simlex999', 'Wordsim353') )
plt.ylim((0,1))
plt.title('Intrinsic evaluation of word vectors on word similarity')
plt.tight_layout()
plt.savefig('intrinsic_comparison_w_pretrained.eps', format='eps', dpi=1000)
plt.show()
# -
# ## Extract Different Algorithms and Context Window Size
# +
ft_skipgram = df.loc[df['algorithm'] == "fasttext"]
ft_skipgram = ft_skipgram.loc[ft_skipgram['data'] == "74million_fashion"]
ft_skipgram = ft_skipgram.loc[ft_skipgram['dimension'] == 300]
ft_skipgram = ft_skipgram.loc[ft_skipgram['model'] == "skipgram"]
ft_cbow = df.loc[df['algorithm'] == "fasttext"]
ft_cbow = ft_cbow.loc[ft_cbow['data'] == "74million_fashion"]
ft_cbow = ft_cbow.loc[ft_cbow['dimension'] == 300]
ft_cbow = ft_cbow.loc[ft_cbow['model'] == "cbow"]
glove = df.loc[df['algorithm'] == "glove"]
glove = glove.loc[glove['dimension'] == 300]
glove = glove.loc[glove['data'] == "74million_fashion"]
word2vec_skipgram = df.loc[df['algorithm'] == "word2vec"]
word2vec_skipgram = word2vec_skipgram.loc[word2vec_skipgram['dimension'] == 300]
word2vec_skipgram = word2vec_skipgram.loc[word2vec_skipgram['model'] == "1"]
word2vec_skipgram = word2vec_skipgram.loc[word2vec_skipgram['data'] == "74million_fashion"]
word2vec_cbow = df.loc[df['algorithm'] == "word2vec"]
word2vec_cbow = word2vec_cbow.loc[word2vec_cbow['dimension'] == 300]
word2vec_cbow = word2vec_cbow.loc[word2vec_cbow['model'] == "0"]
word2vec_cbow = word2vec_cbow.loc[word2vec_cbow['data'] == "74million_fashion"]
# -
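Each chain of `.loc` filters above can be collapsed into a single boolean mask, which is both faster and easier to read. A sketch on a toy frame mimicking the results CSV's columns:

```python
import pandas as pd

df_toy = pd.DataFrame({
    'algorithm': ['fasttext', 'fasttext', 'glove'],
    'data': ['74million_fashion'] * 3,
    'dimension': [300, 300, 300],
    'model': ['skipgram', 'cbow', '-'],
})
mask = ((df_toy['algorithm'] == 'fasttext')
        & (df_toy['data'] == '74million_fashion')
        & (df_toy['dimension'] == 300)
        & (df_toy['model'] == 'skipgram'))
ft_skipgram_toy = df_toy[mask]
print(len(ft_skipgram_toy))  # 1
```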
# ## Plot with respect to hyperparameters settings and algorithm used
# +
context_size = np.sort(ft_skipgram["context"].values.astype(int))
ft_skipgram_score = []
ft_cbow_score = []
glove_score = []
word2vec_skipgram_score = []
word2vec_cbow_score = []
for val in context_size:
ft_skipgram_temp = ft_skipgram.loc[ft_skipgram['context'] == str(val)]
ft_cbow_temp = ft_cbow.loc[ft_cbow['context'] == str(val)]
glove_temp = glove.loc[glove['context'] == str(val)]
word2vec_skipgram_temp = word2vec_skipgram.loc[word2vec_skipgram['context'] == str(val)]
word2vec_cbow_temp = word2vec_cbow.loc[word2vec_cbow['context'] == str(val)]
ft_skipgram_score.append(ft_skipgram_temp["fashionsim_spearman"].values[0])
ft_cbow_score.append(ft_cbow_temp["fashionsim_spearman"].values[0])
glove_score.append(glove_temp["fashionsim_spearman"].values[0])
word2vec_skipgram_score.append(word2vec_skipgram_temp["fashionsim_spearman"].values[0])
word2vec_cbow_score.append(word2vec_cbow_temp["fashionsim_spearman"].values[0])
la = plt.plot(context_size,ft_skipgram_score,'b-',label='FastText Skip-Gram')
lb = plt.plot(context_size,ft_cbow_score,'r--',label='FastText CBOW')
lc = plt.plot(context_size,glove_score,'g:',label='Glove')
ld = plt.plot(context_size,word2vec_skipgram_score,'k-.',label='Word2vec Skip-Gram')
le = plt.plot(context_size,word2vec_cbow_score,'c_',label='Word2vec CBOW')
ll = plt.legend(loc='upper left')
lx = plt.xlabel('context window size')
ly = plt.ylabel('score')
plt.ylim((0,1))
plt.title('Intrinsic evaluation of fashion word vectors with different window sizes')
plt.savefig('intrinsic_comparison_hyperparams.eps', format='eps', dpi=1000)
plt.show()
# -
ft_skipgram = df.loc[df['algorithm'] == "fasttext"]
ft_skipgram = ft_skipgram.loc[ft_skipgram['data'] == "74million_fashion"]
ft_skipgram = ft_skipgram.loc[ft_skipgram['context'] == "2"]
ft_skipgram = ft_skipgram.loc[ft_skipgram['model'] == "skipgram"]
ft_skipgram.head()
# +
dimension_size = np.sort(ft_skipgram["dimension"].values.astype(int))
ft_skipgram_score = []
#ft_cbow_score = []
#glove_score = []
#word2vec_skipgram_score = []
#word2vec_cbow_score = []
for val in dimension_size:
ft_skipgram_temp = ft_skipgram.loc[ft_skipgram['dimension'] == val]
#ft_cbow_temp = ft_cbow.loc[ft_cbow['context'] == str(val)]
#glove_temp = glove.loc[glove['context'] == str(val)]
#word2vec_skipgram_temp = word2vec_skipgram.loc[word2vec_skipgram['context'] == str(val)]
#word2vec_cbow_temp = word2vec_cbow.loc[word2vec_cbow['context'] == str(val)]
ft_skipgram_score.append(ft_skipgram_temp["fashionsim_spearman"].values[0])
#ft_cbow_score.append(ft_cbow_temp["fashionsim_spearman"].values[0])
#glove_score.append(glove_temp["fashionsim_spearman"].values[0])
#word2vec_skipgram_score.append(word2vec_skipgram_temp["fashionsim_spearman"].values[0])
#word2vec_cbow_score.append(word2vec_cbow_temp["fashionsim_spearman"].values[0])
la = plt.plot(dimension_size,ft_skipgram_score,'b-',label='FastText Skip-Gram')
#lb = plt.plot(context_size,ft_cbow_score,'r--',label='FastText CBOW')
#lc = plt.plot(context_size,glove_score,'g:',label='Glove')
#ld = plt.plot(context_size,word2vec_skipgram_score,'k-.',label='Word2vec Skip-Gram')
#le = plt.plot(context_size,word2vec_cbow_score,'c_',label='Word2vec CBOW')
ll = plt.legend(loc='upper left')
lx = plt.xlabel('vector dimension')
ly = plt.ylabel('score')
plt.ylim((0,1))
plt.title('Intrinsic evaluation of fashion word vectors with different vector dimension')
plt.savefig('intrinsic_comparison_dimensions.eps', format='eps', dpi=1000)
plt.show()
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn import metrics
import seaborn as sn
import matplotlib.pyplot as plt
df = pd.read_csv('HDFS1_data.csv')
df.head()
X = df.drop(['row', 'minutes', 'label'], axis = 1)
y = df['label']
# +
from sklearn.preprocessing import MinMaxScaler
min_max_scaler = MinMaxScaler()
X[["sequence_in_block"]] = min_max_scaler.fit_transform(X[["sequence_in_block"]])
(X["sequence_in_block"].min(), X["sequence_in_block"].max())
# -
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
print('X_train', X_train.shape)
print('X_test ', X_test.shape)
print('y_train', y_train.shape)
print('y_test ', y_test.shape)
logistic_regression = LogisticRegression(max_iter=4000)
logistic_regression.fit(X_train, y_train)
y_pred = logistic_regression.predict(X_test)
confusion_matrix = pd.crosstab(y_test, y_pred, rownames=['Actual'], colnames=['Predicted'])
sn.heatmap(confusion_matrix, annot=True)
confusion_matrix
print('Accuracy: ', metrics.accuracy_score(y_test, y_pred))
print("Precision:", metrics.precision_score(y_test, y_pred))
print("Recall:", metrics.recall_score(y_test, y_pred))
y_pred_proba = logistic_regression.predict_proba(X_test)[::,1]
fpr, tpr, _ = metrics.roc_curve(y_test, y_pred_proba)
auc = metrics.roc_auc_score(y_test, y_pred_proba)
plt.plot(fpr,tpr,label="data 1, auc="+str(auc))
plt.legend(loc=4)
plt.show()
# ## Random Forest Classifier
regressor = RandomForestClassifier(n_estimators=100, random_state=0)
regressor.fit(X_train, y_train)
y_pred = regressor.predict(X_test)
confusion_matrix = pd.crosstab(y_test, y_pred, rownames=['Actual'], colnames=['Predicted'])
sn.heatmap(confusion_matrix, annot=True)
confusion_matrix
print('Accuracy: ', metrics.accuracy_score(y_test, y_pred))
print("Precision:", metrics.precision_score(y_test, y_pred))
print("Recall:", metrics.recall_score(y_test, y_pred))
y_pred_proba = regressor.predict_proba(X_test)[::,1]
fpr, tpr, _ = metrics.roc_curve(y_test, y_pred_proba)
auc = metrics.roc_auc_score(y_test, y_pred_proba)
plt.plot(fpr,tpr,label="data 1, auc="+str(auc))
plt.legend(loc=4)
plt.show()
| notebooks/logistic-regression-and-random-forest.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# name: python2
# ---
# +
#Each sale is stored as a dict, e.g.
#{'product': 'internet', 'count': salesnum, 'tags': [...]}
#Discount: add senior and under-18 prices.
#Read the archive folders.
#When an item is sold it is saved in the archive folder.
#There is also a temp folder it is saved to - this is
#used for customer receipts.
#Select from the archive folder the sales items you want
#to display to the customer.
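The record layout sketched above can be illustrated with a small round-trip; the field names and folder are hypothetical stand-ins for the `sellcoffee` archive (written in Python 3 syntax):

```python
import json
import os
import tempfile

# Hypothetical sale record following the schema described above
sale = {'product': 'internet', 'amount': 4,
        'date': '2015-01-02', 'time': '10:30',
        'tags': ['promo']}

# One JSON file per sale in an archive-style folder
archive = tempfile.mkdtemp()
path = os.path.join(archive, 'sale-0001.json')
with open(path, 'w') as outfil:
    json.dump(sale, outfil)

# Read it back the way the loop below does
with open(path) as rejs:
    loaded = json.loads(rejs.read())
print(loaded['product'])
```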
# +
import os
import dominate
from dominate.tags import *
import getpass
import json
# -
usrnam = getpass.getuser()
usrnam
rarch = os.listdir('/home/wcmckee/sellcoffee/archive/')
rarch
nevdict = dict()
# +
#nevdict.update({
# +
#json.loads(rejd)
# -
for rac in rarch:
rejs = open('/home/wcmckee/sellcoffee/archive/' + rac, 'r')
#print rac
recd = rejs.read()
jrecd = json.loads(recd)
print jrecd['product']
print jrecd['amount']
print jrecd['date']
print jrecd['time']
rejs.close()
jrecd.keys()
jrecd.values()
# +
doc = dominate.document(title='Sales Archive')
with doc.head:
link(rel='stylesheet', href='style.css')
script(type ='text/javascript', src='script.js')
#str(str2)
with div():
attr(cls='header')
h1('Sales Archive')
p(img('/imgs/getsdrawn-bw.png', src='/imgs/getsdrawn-bw.png'))
#p(img('imgs/15/01/02/ReptileLover82-reference.png', src= 'imgs/15/01/02/ReptileLover82-reference.png'))
#h1('Date ', str(artes.datetime))
#p(panz)
#p(bodycom)
with doc:
with div(id='body').add(ol()):
for rac in rarch:
rejs = open('/home/wcmckee/sellcoffee/archive/' + rac, 'r')
#print rac
recd = rejs.read()
jrecd = json.loads(recd)
p(jrecd['product'])
p(jrecd['amount'])
p(jrecd['date'])
p(jrecd['time'])
rejs.close()
with div():
attr(cls='body')
p('lcacoffee is open source')
a('https://github.com/wcmckee/lcacoffee')
#print doc
# -
docre = doc.render()
#s = docre.decode('ascii', 'ignore')
yourstring = docre.encode('ascii', 'ignore').decode('ascii')
indfil = ('/home/' + usrnam + '/sellcoffee/index.html')
mkind = open(indfil, 'w')
mkind.write(yourstring)
mkind.close()
| sellarchive.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6
# language: python
# name: python36
# ---
# Copyright (c) Microsoft Corporation. All rights reserved.
#
# Licensed under the MIT License.
# 
# # Automated Machine Learning
# _**Energy Demand Forecasting**_
#
# ## Contents
# 1. [Introduction](#Introduction)
# 1. [Setup](#Setup)
# 1. [Data](#Data)
# 1. [Train](#Train)
# ## Introduction
# In this example, we show how AutoML can be used to forecast a single time-series in the energy demand application area.
#
# Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.
#
# Notebook synopsis:
# 1. Creating an Experiment in an existing Workspace
# 2. Configuration and local run of AutoML for a simple time-series model
# 3. View engineered features and prediction results
# 4. Configuration and local run of AutoML for a time-series model with lag and rolling window features
# 5. Estimate feature importance
# ## Setup
#
# +
import azureml.core
import pandas as pd
import numpy as np
import logging
import warnings
# Squash warning messages for cleaner output in the notebook
warnings.showwarning = lambda *args, **kwargs: None
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from matplotlib import pyplot as plt
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
# -
# As part of the setup you have already created a <b>Workspace</b>. To run AutoML, you also need to create an <b>Experiment</b>. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
# +
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-energydemandforecasting'
# project folder
project_folder = './sample_projects/automl-local-energydemandforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Project Directory'] = project_folder
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', None)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
# -
# ## Data
# We will use energy consumption data from New York City for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. Pandas CSV reader is used to read the file into memory. Special attention is given to the "timeStamp" column in the data since it contains text which should be parsed as datetime-type objects.
data = pd.read_csv("nyc_energy.csv", parse_dates=['timeStamp'])
data.head()
# We must now define the schema of this dataset. Every time-series must have a time column and a target. The target quantity is what will be eventually forecasted by a trained model. In this case, the target is the "demand" column. The other columns, "temp" and "precip," are implicitly designated as features.
# Dataset schema
time_column_name = 'timeStamp'
target_column_name = 'demand'
# ### Forecast Horizon
#
# In addition to the data schema, we must also specify the forecast horizon. A forecast horizon is a time span into the future (or just beyond the latest date in the training data) where forecasts of the target quantity are needed. Choosing a forecast horizon is application specific, but a rule-of-thumb is that **the horizon should be the time-frame where you need actionable decisions based on the forecast.** The horizon usually has a strong relationship with the frequency of the time-series data, that is, the sampling interval of the target quantity and the features. For instance, the NYC energy demand data has an hourly frequency. A decision that requires a demand forecast to the hour is unlikely to be made weeks or months in advance, particularly if we expect weather to be a strong determinant of demand. We may have fairly accurate meteorological forecasts of the hourly temperature and precipitation on the time-scale of a day or two, however.
#
# Given the above discussion, we generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so the user should consider carefully how they set this value. If a long horizon forecast really is necessary, it may be good practice to aggregate the series to a coarser time scale.
#
#
# Forecast horizons in AutoML are given as integer multiples of the time-series frequency. In this example, we set the horizon to 48 hours.
max_horizon = 48
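If a coarser time scale is needed, the aggregation mentioned above can be sketched with pandas `resample`; the `hourly` frame below is a synthetic stand-in for the NYC data, not the real dataset:

```python
import numpy as np
import pandas as pd

# Synthetic hourly demand series standing in for the NYC data
rng = pd.date_range('2017-01-01', periods=48, freq='H')
hourly = pd.DataFrame({'timeStamp': rng,
                       'demand': np.linspace(5000.0, 7000.0, 48)})

# Aggregate to daily frequency: 48 hourly rows become 2 daily rows,
# so a 48-period horizon becomes a 2-period horizon
daily = (hourly.set_index('timeStamp')
               .resample('D')['demand']
               .mean()
               .reset_index())
print(daily.shape)
```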
# ### Split the data into train and test sets
# We now split the data into a train and a test set so that we may evaluate model performance. We note that the tail of the dataset contains a large number of NA values in the target column, so we designate the test set as the 48 hour window ending on the latest date of known energy demand.
# +
# Find time point to split on
latest_known_time = data[~pd.isnull(data[target_column_name])][time_column_name].max()
split_time = latest_known_time - pd.Timedelta(hours=max_horizon)
# Split into train/test sets
X_train = data[data[time_column_name] <= split_time]
X_test = data[(data[time_column_name] > split_time) & (data[time_column_name] <= latest_known_time)]
# Move the target values into their own arrays
y_train = X_train.pop(target_column_name).values
y_test = X_test.pop(target_column_name).values
# -
# ## Train
#
# We now instantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. For forecasting tasks, we must provide extra configuration related to the time-series data schema and forecasting context. Here, only the name of the time column and the maximum forecast horizon are needed. Other settings are described below:
#
# |Property|Description|
# |-|-|
# |**task**|forecasting|
# |**primary_metric**|This is the metric that you want to optimize.<br> Forecasting supports the following primary metrics <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i>
# |**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data|
# |**iteration_timeout_minutes**|Time limit in minutes for each iteration.|
# |**X**|(sparse) array-like, shape = [n_samples, n_features]|
# |**y**|(sparse) array-like, shape = [n_samples, ], targets values.|
# |**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.|
# |**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.
# +
time_series_settings = {
'time_column_name': time_column_name,
'max_horizon': max_horizon
}
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_nyc_energy_errors.log',
primary_metric='normalized_root_mean_squared_error',
iterations=10,
iteration_timeout_minutes=5,
X=X_train,
y=y_train,
n_cross_validations=3,
path=project_folder,
verbosity = logging.INFO,
**time_series_settings)
# -
# Submitting the configuration will start a new run in this experiment. For local runs, the execution is synchronous. Depending on the data and number of iterations, this can run for a while. Parameters controlling concurrency may speed up the process, depending on your hardware.
#
# You will see the currently running iterations printing to the console.
local_run = experiment.submit(automl_config, show_output=True)
local_run
# ### Retrieve the Best Model
# Below we select the best pipeline from our iterations. The get_output method on automl_classifier returns the best run and the fitted model for the last fit invocation. There are overloads on get_output that allow you to retrieve the best run and fitted model for any logged metric or a particular iteration.
best_run, fitted_model = local_run.get_output()
fitted_model.steps
# ### View the engineered names for featurized data
# Below we display the engineered feature names generated for the featurized data using the time-series featurization.
fitted_model.named_steps['timeseriestransformer'].get_engineered_feature_names()
# ### Test the Best Fitted Model
#
# For forecasting, we will use the `forecast` function instead of the `predict` function. There are two reasons for this.
#
# We need to pass the recent values of the target variable `y`, whereas the scikit-compatible `predict` function only takes the non-target variables `X`. In our case, the test data immediately follows the training data, and we fill the `y` variable with `NaN`. The `NaN` values serve as question marks for the forecaster to fill in with forecasts. Using the forecast function will produce forecasts using the shortest possible forecast horizon. The last time at which a definite (non-NaN) value is seen is the _forecast origin_ - the last time when the value of the target is known.
#
# Using the `predict` method would result in getting predictions for EVERY horizon the forecaster can predict at. This is useful when training and evaluating the performance of the forecaster at various horizons, but the level of detail is excessive for normal use.
# Replace ALL values in y_query by NaN.
# The forecast origin will be at the beginning of the first forecast period
# (which is the same time as the end of the last training period).
y_query = y_test.copy().astype(float)
y_query.fill(np.nan)
# The featurized data, aligned to y, will also be returned.
# This contains the assumptions that were made in the forecast
# and helps align the forecast to the original data
y_fcst, X_trans = fitted_model.forecast(X_test, y_query)
# +
# limit the evaluation to data where y_test has actuals
def align_outputs(y_predicted, X_trans, X_test, y_test, predicted_column_name = 'predicted'):
"""
Demonstrates how to get the output aligned to the inputs
using pandas indexes. Helps understand what happened if
the output's shape differs from the input shape, or if
the data got re-sorted by time and grain during forecasting.
Typical causes of misalignment are:
* we predicted some periods that were missing in actuals -> drop from eval
* model was asked to predict past max_horizon -> increase max horizon
* data at start of X_test was needed for lags -> provide previous periods
"""
df_fcst = pd.DataFrame({predicted_column_name : y_predicted})
# y and X outputs are aligned by forecast() function contract
df_fcst.index = X_trans.index
# align original X_test to y_test
X_test_full = X_test.copy()
X_test_full[target_column_name] = y_test
    # X_test_full's index does not include the forecast origin, so reset it for the merge
df_fcst.reset_index(inplace=True)
X_test_full = X_test_full.reset_index().drop(columns='index')
together = df_fcst.merge(X_test_full, how='right')
# drop rows where prediction or actuals are nan
# happens because of missing actuals
# or at edges of time due to lags/rolling windows
clean = together[together[[target_column_name, predicted_column_name]].notnull().all(axis=1)]
return(clean)
df_all = align_outputs(y_fcst, X_trans, X_test, y_test)
df_all.head()
# -
# Looking at `X_trans` is also useful to see what featurization happened to the data.
X_trans
# ### Calculate accuracy metrics
# Finally, we calculate some accuracy metrics for the forecast and plot the predictions vs. the actuals over the time range in the test set.
def MAPE(actual, pred):
"""
Calculate mean absolute percentage error.
Remove NA and values where actual is close to zero
"""
not_na = ~(np.isnan(actual) | np.isnan(pred))
not_zero = ~np.isclose(actual, 0.0)
actual_safe = actual[not_na & not_zero]
pred_safe = pred[not_na & not_zero]
APE = 100*np.abs((actual_safe - pred_safe)/actual_safe)
return np.mean(APE)
# +
print("Simple forecasting model")
rmse = np.sqrt(mean_squared_error(df_all[target_column_name], df_all['predicted']))
print("[Test Data] \nRoot Mean squared error: %.2f" % rmse)
mae = mean_absolute_error(df_all[target_column_name], df_all['predicted'])
print('mean_absolute_error score: %.2f' % mae)
print('MAPE: %.2f' % MAPE(df_all[target_column_name], df_all['predicted']))
# Plot outputs
# %matplotlib inline
pred, = plt.plot(df_all[time_column_name], df_all['predicted'], color='b')
actual, = plt.plot(df_all[time_column_name], df_all[target_column_name], color='g')
plt.xticks(fontsize=8)
plt.legend((pred, actual), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.title('Prediction vs. Actual Time-Series')
plt.show()
# -
# The distribution looks a little heavy tailed: we underestimate the excursions of the extremes. A normal-quantile transform of the target might help, but let's first try using some past data with the lags and rolling window transforms.
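The normal-quantile transform floated above can be sketched with scikit-learn's `QuantileTransformer`; this is a stand-alone illustration on synthetic heavy-tailed data, not something AutoML applies here:

```python
import numpy as np
from sklearn.preprocessing import QuantileTransformer

# Synthetic heavy-tailed target standing in for the demand series
rng = np.random.default_rng(0)
y = rng.lognormal(mean=8.0, sigma=0.5, size=500).reshape(-1, 1)

# Map the empirical quantiles onto a standard normal distribution
qt = QuantileTransformer(n_quantiles=100, output_distribution='normal')
y_gauss = qt.fit_transform(y)           # roughly N(0, 1)
y_back = qt.inverse_transform(y_gauss)  # back to the original scale

print(round(float(y_gauss.mean()), 3))
```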
#
# ### Using lags and rolling window features
# We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, grain and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation.
#
# Now that we have configured target lags, that is, the previous values of the target variable, the prediction is no longer horizon-less. We therefore must still specify the `max_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.
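The effect of `target_lags` and `target_rolling_window_size` can be sketched by hand with pandas; this illustrates the idea only and is not AutoML's actual featurizer:

```python
import pandas as pd

# Toy target series standing in for hourly demand
s = pd.Series([10.0, 12.0, 11.0, 13.0, 15.0, 14.0], name='demand')

feats = pd.DataFrame({
    'demand':   s,
    'lag_1':    s.shift(1),                    # value one period back
    'roll_min': s.shift(1).rolling(4).min(),   # window statistics computed
    'roll_max': s.shift(1).rolling(4).max(),   # on past values only, so the
    'roll_sum': s.shift(1).rolling(4).sum(),   # current target is not leaked
})
print(feats)
```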
# +
time_series_settings_with_lags = {
'time_column_name': time_column_name,
'max_horizon': max_horizon,
'target_lags': 12,
'target_rolling_window_size': 4
}
automl_config_lags = AutoMLConfig(task='forecasting',
debug_log='automl_nyc_energy_errors.log',
primary_metric='normalized_root_mean_squared_error',
blacklist_models=['ElasticNet'],
iterations=10,
iteration_timeout_minutes=10,
X=X_train,
y=y_train,
n_cross_validations=3,
path=project_folder,
verbosity=logging.INFO,
**time_series_settings_with_lags)
# -
# We now start a new local run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations.
local_run_lags = experiment.submit(automl_config_lags, show_output=True)
best_run_lags, fitted_model_lags = local_run_lags.get_output()
y_fcst_lags, X_trans_lags = fitted_model_lags.forecast(X_test, y_query)
df_lags = align_outputs(y_fcst_lags, X_trans_lags, X_test, y_test)
df_lags.head()
X_trans_lags
# +
print("Forecasting model with lags")
rmse = np.sqrt(mean_squared_error(df_lags[target_column_name], df_lags['predicted']))
print("[Test Data] \nRoot Mean squared error: %.2f" % rmse)
mae = mean_absolute_error(df_lags[target_column_name], df_lags['predicted'])
print('mean_absolute_error score: %.2f' % mae)
print('MAPE: %.2f' % MAPE(df_lags[target_column_name], df_lags['predicted']))
# Plot outputs
# %matplotlib inline
pred, = plt.plot(df_lags[time_column_name], df_lags['predicted'], color='b')
actual, = plt.plot(df_lags[time_column_name], df_lags[target_column_name], color='g')
plt.xticks(fontsize=8)
plt.legend((pred, actual), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
# -
# ### What features matter for the forecast?
# +
from azureml.train.automl.automlexplainer import explain_model
# feature names are everything in the transformed data except the target
features = X_trans_lags.columns[:-1]
expl = explain_model(fitted_model_lags, X_train.copy(), X_test.copy(), features=features, best_run=best_run_lags, y_train=y_train)
# unpack the tuple
shap_values, expected_values, feat_overall_imp, feat_names, per_class_summary, per_class_imp = expl
best_run_lags
# -
# Please go to the Azure Portal's best run to see the top features chart.
#
# The informative features make all sorts of intuitive sense. Temperature is a strong driver of heating and cooling demand in NYC. Apart from that, the daily life cycle, expressed by `hour`, and the weekly cycle, expressed by `wday`, drive people's energy use habits.
| how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="BRXjsYRqjn4i" colab_type="text"
# # Necessary Code Blocks to Run
# + id="LIb1KQlZfshT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1594836251568, "user_tz": 420, "elapsed": 947, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjtGkaB4-PJhy0Toe8mvX5KhFs9RNNYVua5aDhI=s64", "userId": "00075263077672689698"}} outputId="28cc3179-7ef9-44e8-bdd5-2791c964bebb"
import pandas as pd
from google.colab import drive
drive.mount('/content/gdrive')
df = pd.read_csv('/content/gdrive/My Drive/covid19deaths.csv')
# + id="IpxDlViHprPR" colab_type="code" colab={}
# + id="1tLs6-VjhK8O" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} executionInfo={"status": "ok", "timestamp": 1594836252097, "user_tz": 420, "elapsed": 1456, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjtGkaB4-PJhy0Toe8mvX5KhFs9RNNYVua5aDhI=s64", "userId": "00075263077672689698"}} outputId="55968356-719c-4e78-8bd9-cf666676e6cf"
df1 = df.groupby('Province_State').sum()
df1
# + id="9M6cEftu00Qn" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1594836252098, "user_tz": 420, "elapsed": 1453, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjtGkaB4-PJhy0Toe8mvX5KhFs9RNNYVua5aDhI=s64", "userId": "00075263077672689698"}}
df1 = df1.reset_index()
# + id="J5A2NS4W0Wsh" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1594836252099, "user_tz": 420, "elapsed": 1449, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjtGkaB4-PJhy0Toe8mvX5KhFs9RNNYVua5aDhI=s64", "userId": "00075263077672689698"}}
df1 = df1.loc[~df1['Province_State'].isin(['American Samoa','Northern Mariana Islands', 'Grand Princess', 'Diamond Princess', 'Guam', 'District of Columbia', 'Puerto Rico', 'Virgin Islands'])]
# + id="oLj7cuBQ0Rx7" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1594836252100, "user_tz": 420, "elapsed": 1445, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjtGkaB4-PJhy0Toe8mvX5KhFs9RNNYVua5aDhI=s64", "userId": "00075263077672689698"}}
area = ['S', 'W', 'SW', 'S', 'W', 'W', 'NE', 'MA', 'S', 'S', 'W', 'W', 'MW', 'MW', 'MW', 'MW', 'S', 'S', 'NE', 'MA', 'NE', 'MW', 'MW', 'S', 'MW', 'W', 'MW', 'W', 'NE', 'MA', 'SW', 'MA', 'S', 'MW', 'MW', 'SW', 'W', 'MA', 'NE', 'S', 'MW', 'S', 'SW', 'W', 'NE', 'S', 'W', 'S', 'MW', 'W']
df1['Area'] = area
# + id="uI2jn1RVxNj9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 406} executionInfo={"status": "ok", "timestamp": 1594836252100, "user_tz": 420, "elapsed": 1425, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjtGkaB4-PJhy0Toe8mvX5KhFs9RNNYVua5aDhI=s64", "userId": "00075263077672689698"}} outputId="955a7434-fc21-4c00-e385-264240e899d6"
df1 = df1.melt(['UID','code3','FIPS','Lat','Long_','Population', 'Province_State', 'Area'])
df1
# + id="v6zsVApkpRS1" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1594836252101, "user_tz": 420, "elapsed": 1422, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjtGkaB4-PJhy0Toe8mvX5KhFs9RNNYVua5aDhI=s64", "userId": "00075263077672689698"}}
df1.Province_State.unique()
states = list(df1.Province_State.unique())
# + id="zGurDxffpi0C" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1594836252102, "user_tz": 420, "elapsed": 1416, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjtGkaB4-PJhy0Toe8mvX5KhFs9RNNYVua5aDhI=s64", "userId": "00075263077672689698"}}
state = 'California'
tempdf = df1.loc[df1['Province_State'] == state]
# + id="RhPNonRIrqSE" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 406} executionInfo={"status": "ok", "timestamp": 1594836252297, "user_tz": 420, "elapsed": 1569, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjtGkaB4-PJhy0Toe8mvX5KhFs9RNNYVua5aDhI=s64", "userId": "00075263077672689698"}} outputId="d7d7f232-efea-4800-cf40-653763f87596"
df1 = df1.join(pd.get_dummies(df1['Area'], prefix = 'Area1'))
df1
# + id="257stitXwHuI" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1594836252300, "user_tz": 420, "elapsed": 1567, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjtGkaB4-PJhy0Toe8mvX5KhFs9RNNYVua5aDhI=s64", "userId": "00075263077672689698"}}
df1['variable'] = pd.to_datetime(df1['variable'])
# + id="KEwRUIcKwQ7l" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 230} executionInfo={"status": "ok", "timestamp": 1594836252301, "user_tz": 420, "elapsed": 1547, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjtGkaB4-PJhy0Toe8mvX5KhFs9RNNYVua5aDhI=s64", "userId": "00075263077672689698"}} outputId="4e0032d5-7c8f-48f7-97ee-8eadfee048ae"
otherdate = df1.iloc[0]['variable']
df1['number_of_days'] = (df1['variable'] - otherdate).dt.days
df1['number_of_days']
# + id="M_M70zsc49yc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 406} executionInfo={"status": "ok", "timestamp": 1594836252302, "user_tz": 420, "elapsed": 1525, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjtGkaB4-PJhy0Toe8mvX5KhFs9RNNYVua5aDhI=s64", "userId": "00075263077672689698"}} outputId="18eea94c-d55a-447b-ab5e-0b0734ea751e"
df1
# + id="WIhK-Al81akn" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1594836252302, "user_tz": 420, "elapsed": 1514, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjtGkaB4-PJhy0Toe8mvX5KhFs9RNNYVua5aDhI=s64", "userId": "00075263077672689698"}}
df1.Province_State.unique()
states = list(df1.Province_State.unique())
# + id="GKhXKJ5UCGrT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} executionInfo={"status": "ok", "timestamp": 1594836252927, "user_tz": 420, "elapsed": 2113, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjtGkaB4-PJhy0Toe8mvX5KhFs9RNNYVua5aDhI=s64", "userId": "00075263077672689698"}} outputId="92467c2d-ea4a-4a38-b296-eb37145e6e40"
for state in states:
print(state)
statedf = df1.loc[df1['Province_State'] == state]
firstdeath = statedf.loc[statedf['value'] != 0].iloc[0]['variable']
print(firstdeath)
df1.loc[df1['Province_State'] == state, 'firstdeath'] = firstdeath
# + id="9vqjITgwEIoT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 406} executionInfo={"status": "ok", "timestamp": 1594836252929, "user_tz": 420, "elapsed": 2088, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjtGkaB4-PJhy0Toe8mvX5KhFs9RNNYVua5aDhI=s64", "userId": "00075263077672689698"}} outputId="e8fdc5df-2026-4e56-f0b8-d2c2f699009e"
df1
# + id="wzUtOof4CHlW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 230} executionInfo={"status": "ok", "timestamp": 1594836253095, "user_tz": 420, "elapsed": 2234, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjtGkaB4-PJhy0Toe8mvX5KhFs9RNNYVua5aDhI=s64", "userId": "00075263077672689698"}} outputId="b6bd0486-16fa-4f10-ae90-368277798138"
df1['dayssincefirst'] = (df1['variable'] - df1['firstdeath']).dt.days
df1['dayssincefirst']
# + id="30DxIa7jEyn1" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1594836253096, "user_tz": 420, "elapsed": 2229, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjtGkaB4-PJhy0Toe8mvX5KhFs9RNNYVua5aDhI=s64", "userId": "00075263077672689698"}}
#if (df1['variable'] - df1['firstdeath']) >= 0:
# + id="83tRN0E71aw_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} executionInfo={"status": "ok", "timestamp": 1594836253302, "user_tz": 420, "elapsed": 2414, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjtGkaB4-PJhy0Toe8mvX5KhFs9RNNYVua5aDhI=s64", "userId": "00075263077672689698"}} outputId="571abc2c-0149-4d4a-9e99-2cbfa271da7e"
for state in states:
print(state)
statedf = df1.loc[df1['Province_State'] == state]
print(statedf.loc[statedf['value'] != 0].iloc[0])
# + [markdown] id="24bGS8UqkRxp" colab_type="text"
# # Scaling
# + id="YxpGQk0LmsS9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 0} executionInfo={"status": "ok", "timestamp": 1594836253859, "user_tz": 420, "elapsed": 2915, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjtGkaB4-PJhy0Toe8mvX5KhFs9RNNYVua5aDhI=s64", "userId": "00075263077672689698"}} outputId="4a627592-cfef-4bc3-ab06-be4bc9d8a758"
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaled = scaler.fit_transform(df1[['Population',
'value',]])
df1_scaled = pd.DataFrame(data=scaled, columns=['Population_Scaled',
'Deaths_Scaled',])
df1 = df1.join(df1_scaled)
df1.head()
# + [markdown] id="v4RwCJwkkWq2" colab_type="text"
# # Linear Regression
#
# + id="RkN-QXhavPIi" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 0} executionInfo={"status": "ok", "timestamp": 1594836254054, "user_tz": 420, "elapsed": 3093, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjtGkaB4-PJhy0Toe8mvX5KhFs9RNNYVua5aDhI=s64", "userId": "00075263077672689698"}} outputId="ab368439-939f-4f7f-a948-028d6c6de656"
from sklearn.linear_model import LinearRegression
LR=LinearRegression()
X = df1[['Population_Scaled']]
y = df1['Deaths_Scaled']
df1
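# The cell above constructs `LR`, `X` and `y` but never calls `fit`. A minimal sketch of
# completing the fit and reading back the coefficient and R² score (toy data standing in
# for the scaled columns, since the real frame isn't available here):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy stand-in for df1[['Population_Scaled']] / df1['Deaths_Scaled'].
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 1))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=50)

LR = LinearRegression()
LR.fit(X, y)          # the step missing from the cell above
r2 = LR.score(X, y)   # coefficient of determination
print(LR.coef_, LR.intercept_, r2)
```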
# + id="7-0f5mTT7cHU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 0} outputId="5e685ccb-d291-41ae-a9b2-360adf3261c0"
df1
# + [markdown] id="UM5XpmbZHyQV" colab_type="text"
# # Download CSV
# + [markdown] id="PbO2GvZbPRqv" colab_type="text"
# # FIND OUT HOW TO TRANSFER DATA SETS BETWEEN COLAB NOTEBOOKS
# + id="RV35FUevG0ch" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 310} outputId="8ba4579d-99bf-4198-a99c-4ff62f25ee19"
from google.colab import files
files.download('Coronavirus Data Set.ipynb.csv')
# + id="DTDmGefRSJuA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 296} outputId="115f288d-9ccc-4516-a570-31cd89e61938"
df1.plot(kind= 'scatter', x = 'variable', y = 'value')
# + id="rxnmMhbMS8Of" colab_type="code" colab={}
AL = df1.loc[df1.Province_State == 'Alabama']
AK = df1.loc[df1.Province_State == 'Alaska']
AZ = df1.loc[df1.Province_State == 'Arizona']
AR = df1.loc[df1.Province_State == 'Arkansas']
CA = df1.loc[df1.Province_State == 'California']
CO = df1.loc[df1.Province_State == 'Colorado']
CT = df1.loc[df1.Province_State == 'Connecticut']
DE = df1.loc[df1.Province_State == 'Delaware']
FL = df1.loc[df1.Province_State == 'Florida']
GA = df1.loc[df1.Province_State == 'Georgia']
HI = df1.loc[df1.Province_State == 'Hawaii']
ID = df1.loc[df1.Province_State == 'Idaho']
IL = df1.loc[df1.Province_State == 'Illinois']
IN = df1.loc[df1.Province_State == 'Indiana']
IA = df1.loc[df1.Province_State == 'Iowa']
KS = df1.loc[df1.Province_State == 'Kansas']
KY = df1.loc[df1.Province_State == 'Kentucky']
LA = df1.loc[df1.Province_State == 'Louisiana']
ME = df1.loc[df1.Province_State == 'Maine']
MD = df1.loc[df1.Province_State == 'Maryland']
MA = df1.loc[df1.Province_State == 'Massachusetts']
MI = df1.loc[df1.Province_State == 'Michigan']
MN = df1.loc[df1.Province_State == 'Minnesota']
MS = df1.loc[df1.Province_State == 'Mississippi']
MO = df1.loc[df1.Province_State == 'Missouri']
MT = df1.loc[df1.Province_State == 'Montana']
NE = df1.loc[df1.Province_State == 'Nebraska']
NV = df1.loc[df1.Province_State == 'Nevada']
NH = df1.loc[df1.Province_State == 'New Hampshire']
NJ = df1.loc[df1.Province_State == 'New Jersey']
NM = df1.loc[df1.Province_State == 'New Mexico']
NY = df1.loc[df1.Province_State == 'New York']
NC = df1.loc[df1.Province_State == 'North Carolina']
ND = df1.loc[df1.Province_State == 'North Dakota']
OH = df1.loc[df1.Province_State == 'Ohio']
OK = df1.loc[df1.Province_State == 'Oklahoma']
OR = df1.loc[df1.Province_State == 'Oregon']
PA = df1.loc[df1.Province_State == 'Pennsylvania']
RI = df1.loc[df1.Province_State == 'Rhode Island']
SC = df1.loc[df1.Province_State == 'South Carolina']
SD = df1.loc[df1.Province_State == 'South Dakota']
TN = df1.loc[df1.Province_State == 'Tennessee']
TX = df1.loc[df1.Province_State == 'Texas']
UT = df1.loc[df1.Province_State == 'Utah']
VT = df1.loc[df1.Province_State == 'Vermont']
VA = df1.loc[df1.Province_State == 'Virginia']
WA = df1.loc[df1.Province_State == 'Washington']
WV = df1.loc[df1.Province_State == 'West Virginia']
WI = df1.loc[df1.Province_State == 'Wisconsin']
WY = df1.loc[df1.Province_State == 'Wyoming']
DC = df1.loc[df1.Province_State == 'District of Columbia']
PR = df1.loc[df1.Province_State == 'Puerto Rico']
AS = df1.loc[df1.Province_State == 'American Samoa']
GU = df1.loc[df1.Province_State == 'Guam']
NI = df1.loc[df1.Province_State == 'Northern Mariana Islands']
VI = df1.loc[df1.Province_State == 'Virgin Islands']
DP = df1.loc[df1.Province_State == 'Diamond Princess']
GP = df1.loc[df1.Province_State == 'Grand Princess']
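# The 58 hand-written assignments above can be collapsed into a single dict keyed by state
# name. A minimal sketch with a toy frame (hypothetical values; `state_dfs` is an
# illustrative name, not from the original):

```python
import pandas as pd

# Toy stand-in for df1.
df = pd.DataFrame({
    'Province_State': ['Alabama', 'Alaska', 'Alabama'],
    'value': [1, 2, 3],
})

# One dict instead of 58 variables; each key maps to that state's sub-frame.
state_dfs = {state: group for state, group in df.groupby('Province_State')}
print(state_dfs['Alabama'])
```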
# + id="qqb4v6-9eH5e" colab_type="code" colab={}
NY.plot(kind = 'scatter', x = 'variable' , y = 'value')
# + id="A5nV2rIkhA7U" colab_type="code" colab={}
df1.Province_State.unique()
states = list(df1.Province_State.unique())
# + id="qFGxGwaehcB8" colab_type="code" colab={}
state = 'New York'
tempdf = df1.loc[df1['Province_State'] == state]
tempdf.plot(kind = 'scatter', x = 'value', y = 'Population')
# + id="1_Rsu8kHiHDG" colab_type="code" colab={}
for state in states:
    print(state)
    # assign the per-state slice so pct_change is computed per state
    tempdf = df1.loc[df1['Province_State'] == state]
    tempdf['pct'] = tempdf['value'].pct_change()
    # print(tempdf['pct'].mean())
# + id="SqoBVtdXn_Gg" colab_type="code" colab={}
import numpy as np
tempdf.loc[~np.isfinite(tempdf['pct'])]
# + id="U46Gb7Op1zRP" colab_type="code" colab={}
df1['variable'] = pd.to_datetime(df1['variable'])
df1
# + id="1s2Cz6qCSc0Y" colab_type="code" colab={}
list(df1)
dates = [x for x in list(df1) if '/' in x]
pd.melt(df, id_vars=['Province_State'], value_vars=['7/1/20'])
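# `pd.melt` reshapes wide per-date columns into long (variable, value) pairs, one row per
# state per date. A small self-contained sketch with hypothetical dates and counts:

```python
import pandas as pd

# Toy wide frame: one column per date.
wide = pd.DataFrame({'Province_State': ['Alabama', 'Alaska'],
                     '7/1/20': [10, 2],
                     '7/2/20': [12, 2]})

# Long form: 'variable' holds the date, 'value' the count for that state/date.
long = pd.melt(wide, id_vars=['Province_State'],
               value_vars=['7/1/20', '7/2/20'])
print(long)
```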
# + id="RZYeccG2ZGir" colab_type="code" colab={}
totaldeaths = df1['7/1/20'].sum()
avgdeaths = totaldeaths/58
df1['aboveavgdeaths'] = 0
df1.loc[df1['7/1/20'] > avgdeaths, 'aboveavgdeaths'] = 1
df1
# x must be a column name, not a boolean Series
df1.plot(kind = 'scatter', x = 'aboveavgdeaths', y = 'Population')
# + [markdown] id="v3Mr7F8BuxNo" colab_type="text"
# # Classification
# + id="GGSEy4m9yaWh" colab_type="code" colab={}
from sklearn.neighbors import KNeighborsClassifier
KNN = KNeighborsClassifier(n_neighbors=5)
X = df1[['Population']]
y = df1['value']  # note: 'value' holds raw death counts, so treating it as class labels is questionable
KNN.fit(X, y)
df1['prediction'] = KNN.predict(X)
df1[['value','prediction']].sample(20)
# + id="Uh33go7evML6" colab_type="code" colab={}
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
print(accuracy_score(df1['value'],df1['prediction']))
print(recall_score(df1['value'],df1['prediction'],average='macro'))
print(precision_score(df1['value'],df1['prediction'],average='macro'))
print(f1_score(df1['value'],df1['prediction'],average='macro'))
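# The four scores above are computed on the same rows the classifier was fit on, which
# overstates performance. A hedged sketch of a held-out evaluation (iris is a stand-in
# dataset here; the point is the split, not the data):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
# score on rows the model never saw during fitting
test_acc = accuracy_score(y_test, knn.predict(X_test))
print(test_acc)
```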
# + id="UGA114iw6SeH" colab_type="code" colab={}
df1 = df.groupby('Province_State').sum()
print(df1)
area = ['S', 'W', 'WC', 'MW', 'NE', 'NE', 'SW', 'WC', 'MW', 'NE', 'MW', 'NE', 'SW', 'WC', 'MW', 'NE', 'NE', 'SW', 'WC', 'MW', 'NE', 'NE', 'SW', 'WC', 'MW', 'NE', 'NE', 'SW', 'WC', 'MW', 'NE', 'NE', 'SW', 'WC', 'MW', 'NE', 'NE', 'SW', 'WC', 'MW', 'NE', 'NE', 'SW', 'WC', 'MW', 'NE', 'NE', 'SW', 'WC', 'MW', 'NE','MW', 'NE', 'NE', 'SW', 'WC', 'MW', 'NE']
df1['Area'] = area
# + [markdown] id="Yf9B9WrrBk4c" colab_type="text"
# # Time Series Split
# + [markdown] id="ukhaH1nI7dbk" colab_type="text"
# # Cross-Validation
# + id="mNrlEcQuBixw" colab_type="code" colab={}
import numpy as np
from sklearn.model_selection import TimeSeriesSplit
from sklearn.model_selection import cross_val_score
# Toy arrays; note the keyword is n_splits (not n_split) and it must be an
# integer, so TimeSeriesSplit(n_split=len(df)/58) raised an error.
X = np.array([[1, 2], [3, 4], [1, 2], [3, 4], [1, 2], [3, 4]])
y = np.array([1, 2, 3, 4, 5, 6])
tscv = TimeSeriesSplit(max_train_size=None, n_splits=5)
print(tscv)
for train_index, test_index in tscv.split(X):
    print("TRAIN:", train_index, "TEST:", test_index)
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]
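# `cross_val_score` is imported above but never used. A sketch of combining it with a
# forward-chaining `TimeSeriesSplit` (toy ordered data with an exact linear trend; the
# variable names are illustrative):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

# Toy time series: y depends linearly on time, so every fold fits perfectly.
X = np.arange(30).reshape(-1, 1).astype(float)
y = 3.0 * X[:, 0] + 1.0

tscv = TimeSeriesSplit(n_splits=5)  # keyword is n_splits
# One R^2 per fold; each fold trains on a prefix and tests on the next block.
scores = cross_val_score(LinearRegression(), X, y, cv=tscv, scoring='r2')
print(scores)
```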
# + [markdown] id="WZoRZlCI7vR-" colab_type="text"
# # Sequential Feature Selection
# + id="aKD-wANHmYY4" colab_type="code" colab={}
from mlxtend.feature_selection import SequentialFeatureSelector as SFS
from mlxtend.plotting import plot_sequential_feature_selection as plot_sfs
import matplotlib.pyplot as plt
# + id="NvNeLhfi7c8j" colab_type="code" colab={}
KNN = KNeighborsClassifier(n_neighbors=5)
X = df_iris[['SepalLengthCm',
'SepalWidthCm',
'PetalLengthCm',
'PetalWidthCm']]
y = df_iris['Species']
# + id="IJ9GX3jbHldb" colab_type="code" colab={}
sfs = SFS(KNN,
k_features=4,
scoring='f1_macro',
cv=5)
sfs = sfs.fit(X, y)
sfs.get_metric_dict()
# + id="Q2N91mNYHlVQ" colab_type="code" colab={}
fig = plot_sfs(sfs.get_metric_dict(), kind='std_err')
plt.title('Sequential Forward Selection (w. StdErr)')
plt.grid()
plt.show()
| Coronavirus Data Set.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="F67xqO4tayCD" colab_type="code" colab={}
# Decision Tree Classification
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# + id="D5pi0nqwbIhN" colab_type="code" colab={}
# Importing the dataset
dataset = pd.read_csv('Social_Network_Ads.csv')
X = dataset.iloc[:, [2, 3]].values
y = dataset.iloc[:, 4].values
# + id="MfgYOjfbbL84" colab_type="code" colab={}
# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
# + id="CPKM2UFBbOqI" colab_type="code" colab={}
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
# + id="0QdDCWn9bWBX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 119} outputId="7718e02c-e8b3-430b-f8d7-670630adfa1a"
# Fitting Decision Tree Classification to the Training set
from sklearn.tree import DecisionTreeClassifier
classifier = DecisionTreeClassifier(criterion = 'entropy', random_state = 0)
classifier.fit(X_train, y_train)
# + id="nHfJOBgRbYsc" colab_type="code" colab={}
# Predicting the Test set results
y_pred = classifier.predict(X_test)
# + id="vO5z__U0bcB0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="575dc883-6081-4342-d8e9-d78502427c3a"
print(X_train)
# + id="IyQyZk8UbcxL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="11a60c68-954b-426f-ad39-66434fa6404d"
print(X_test)
# + id="4cquS1QVbc7q" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 170} outputId="4dbb9869-07e5-44d7-d806-873b45739666"
print(y_train)
# + id="rQJQLzMqbdHt" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="c69db927-5886-4c20-ebba-2a3afb2df00d"
print(y_test)
# + id="x8axNcdsbdTT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="ac61fb8d-e09c-423b-c7d8-4b325c548ab8"
print(y_pred)
# + id="1edBjlD3bewA" colab_type="code" colab={}
# Making the Confusion Matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
# + id="6Ys2klwZbyNC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="528c0441-9df0-4fb4-aea3-bfa99e5e7f8b"
print(cm)
# + id="wmLwv1JEb0Cb" colab_type="code" colab={}
from sklearn.metrics import accuracy_score
accuracy = accuracy_score(y_test, y_pred)
# + id="ZuRWilcRb2KV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="c34bdc4d-13d4-47f3-9561-87e9e1e3ef21"
print(accuracy)
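# The accuracy printed above can equivalently be read off the confusion matrix: correct
# predictions sit on the diagonal, so accuracy = trace / total. A sketch with a
# hypothetical 2x2 matrix (not the actual `cm` from this run):

```python
import numpy as np

# Hypothetical confusion matrix: rows = true class, columns = predicted class.
cm = np.array([[62, 6],
               [ 3, 29]])
accuracy = np.trace(cm) / cm.sum()  # correct predictions / all predictions
print(accuracy)  # 0.91
```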
# + id="MFO3PJUib2ux" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 349} outputId="a52a1ce2-ca74-4cb3-c013-95d68090e02d"
# Visualising the Training set results
from matplotlib.colors import ListedColormap
X_set, y_set = X_train, y_train
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('Decision Tree Classification (Training set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
# + id="rMJycmgGcKU7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 349} outputId="45c01ed8-081d-446f-b1c5-c24c255b4e82"
# Visualising the Test set results
from matplotlib.colors import ListedColormap
X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('Decision Tree Classification (Test set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
| Decision_Tree_Classification/Decision_Tree_Classification.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import re
f = open('puzzle_12_input.txt', 'r')
pairs = []
for i in f:
pair = i.split('-')
pairs.append((pair[0], re.sub('\n', '', pair[1])))
pairs.remove(('ZP', 'start'))
pairs.remove(('wx', 'start'))
pairs
system = ['yc']
found = True
while found == True:
to_add = []
found = False
for i in system:
for pair in pairs:
if i in pair and (pair[0] not in system or pair[1] not in system):
to_add.append(pair[0])
to_add.append(pair[1])
to_add = list(set(to_add))
if to_add != []:
system = system + to_add
found = True
system = list(set(system))
system
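# The repeated-scan loop above is a connected-component search: it keeps sweeping all
# pairs until no new cave is added. A breadth-first search over an adjacency map finds
# the same component in one pass. A sketch with a sample edge list (hypothetical cave
# names, same pair format as `pairs`):

```python
from collections import defaultdict, deque

# Sample undirected edge list.
edges = [('yc', 'a'), ('a', 'b'), ('c', 'd')]

adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

# BFS from 'yc' visits exactly the caves reachable from it.
seen, queue = {'yc'}, deque(['yc'])
while queue:
    node = queue.popleft()
    for nbr in adj[node] - seen:
        seen.add(nbr)
        queue.append(nbr)

print(sorted(seen))  # ['a', 'b', 'yc']
```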
pairs_new = [pair for pair in pairs if (pair[0] in system or pair[1] in system)]
pairs_new
class cave:
    def __init__(self, name):
        self.name = name
        self.connections = []
        if name.isupper():
            self.size = 'uppercase'
        else:
            self.size = 'lowercase'
    # compare caves by name so `in` and list.index work as intended below
    def __eq__(self, other):
        return isinstance(other, cave) and self.name == other.name
    def __hash__(self):
        return hash(self.name)
x = cave('kc')
y = cave('qy')
x.connections.append(y)
y.connections.append(5)
z = cave('ZP')
x.connections[0].connections
caves = []
for i in pairs_new:
    x = cave(i[0])
    if x in caves:
        caves[caves.index(x)].connections.append(cave(i[1]))
    else:
        x.connections.append(cave(i[1]))  # record the edge for a newly seen cave too
        caves.append(x)
    x = cave(i[1])
    if x in caves:
        caves[caves.index(x)].connections.append(cave(i[0]))  # was cave(i[1]), which linked a cave to itself
    else:
        x.connections.append(cave(i[0]))
        caves.append(x)
#caves.append(cave(i[0], i[1]))
#caves.append(cave(i[1], i[0]))
for i in caves:
print(len(i.connections))
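# A subtlety in `x in caves`: membership tests use `==`, and for a class that does not
# define `__eq__`, equality falls back to object identity, so two caves built from the
# same name never compare equal. A minimal demonstration with hypothetical `Cave`/`CaveEq`
# classes and the name-based fix:

```python
class Cave:
    def __init__(self, name):
        self.name = name

a, b = Cave('kc'), Cave('kc')
print(a == b)  # False: default equality is object identity

class CaveEq(Cave):
    # compare by name so `in` and list.index find matching caves
    def __eq__(self, other):
        return isinstance(other, Cave) and self.name == other.name
    def __hash__(self):
        return hash(self.name)

print(CaveEq('kc') == CaveEq('kc'))                  # True
print(CaveEq('kc') in [CaveEq('qy'), CaveEq('kc')])  # True
```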
def get_connections(x):
    # unfinished in the original ("while path[-1]" had no condition or body); left as a stub
    pass
path = ['start','yc']
s = ''
for i in path:
s = s + i + '->'
print(s)
'XQ' in ('nv', 'XQ')
| code_12.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## 3. Fitting "in-the-wild" images
#
# #### Prerequisites
#
# - Access to a combined 3D facial shape model of identity and expression
# - Access to an "in-the-wild" texture model
# - Access to an "in-the-wild" image with landmarks to fit. The landmarks should be given as input and should follow the iBUG 68 mark-up. It is highly recommended that these landmarks are "3D-Aware-2D" (3DA-2D) landmarks, as e.g. described in the paper (Zafeiriou et al., "The 3D menpo facial landmark tracking challenge", ICCV-W 2017).
#
# The first two of these can be generated by following the two previous notebooks in this folder:
#
# - `1. Building an "in-the-wild" texture model.ipynb`
# - `2. Creating an expressive 3DMM.ipynb`
#
# Running these notebooks to completion will lead to the following two files being generated in the `DATA_PATH` folder:
#
# - `itw_texture_model.pkl`
# - `id_exp_shape_model.pkl`
#
# This script shows how to load these directly and use them in fitting.
#
# You could of course understand the required formats by studying the aforementioned scripts and load your own shape and texture models instead.
# +
from pathlib import Path
import numpy as np
import menpo.io as mio
from menpo.base import LazyList
from menpo.shape import PointCloud
from menpo.visualize import print_progress
from menpo3d.morphablemodel import ColouredMorphableModel
from itwmm import (
initialize_camera_from_params, initialize_camera,
fit_image, instance_for_params,
render_initialization, render_iteration,
)
# Replace DATA_PATH with the path to your data. It should have files:
# itw_texture_model.pkl
# id_exp_shape_model.pkl
# As generated from Notebooks 1. and 2.)
DATA_PATH = Path('~/Dropbox/itwmm_src_data/').expanduser()
# +
def prepare_image_and_return_transforms(diagonal, feature_f, image):
    # this variation is needed if we need to know how the input image
# is transformed
img, t = image.crop_to_landmarks_proportion(0.4, return_transform=True)
img, scale = img.rescale_landmarks_to_diagonal_range(diagonal, return_transform=True)
return {
'image': feature_f(img),
't': t.translation_component,
'scale': scale.scale[0]
}
def load_id_exp_shape_model(path):
sm_dict = mio.import_pickle(path)
shape_model = sm_dict['shape_model']
lms = sm_dict['lms']
id_ind = sm_dict['id_ind']
exp_ind = sm_dict['exp_ind']
return shape_model, lms, id_ind, exp_ind
def load_itw_texture_model(path):
tm_dict = mio.import_pickle(path)
texture_model = tm_dict['texture_model']
diagonal_range = tm_dict['diagonal_range']
feature_function = tm_dict['feature_function']
return texture_model, diagonal_range, feature_function
# -
# ## Prepare data and model
# +
# LOAD SHAPE MODEL
# note that id_ind and exp_ind are two index mappings into the components of
# this special combined shape model. The first records the index position of
# components that are related to identity, the second an index of the (remaining)
# components which are related to expression.
shape_model, lms, id_ind, exp_ind = load_id_exp_shape_model(DATA_PATH / 'id_exp_shape_model.pkl')
# record the number of ID / EXP params
n_p, n_q = id_ind.shape[0], exp_ind.shape[0]
# LOAD ITW TEXTURE MODEL
# Note we have to know the diagonal setting and feature used in the texture model.
texture_model, diagonal_range, feature_function = load_itw_texture_model(DATA_PATH / 'itw_texture_model.pkl')
# construct our Morphable Model that we can use in the fitting approaches below
mm = ColouredMorphableModel(shape_model, texture_model, lms,
holistic_features=feature_function,
diagonal=diagonal_range)
# +
# load some images and prepare them for fitting.
# Note that we have to rescale the images/extract the feature we used for the model
# ourselves. Unlike previous menpo fitting routines, fit_image is a simpler implementation.
# it requires us to explicitly do more before we call fit_image, but it is much simpler to
# follow what is being done in the code.
frame = mio.import_image('itw_image.jpg')
transform_info = prepare_image_and_return_transforms(diagonal_range, feature_function, frame)
image = transform_info.pop('image')
# -
# ## Initialize parameters
# +
# %matplotlib inline
# initialize the shape weights to zero (mean)
p = np.zeros(n_p)
q = np.zeros(n_q)
# initialize the camera with a large focal length (orthographic)
camera = initialize_camera_from_params(image, mm, id_ind, exp_ind, p, q, focal_length=99999999)
c = camera.as_vector()
# Check the initialization looks sensible
# Note that most methods in ITWMM expect video inputs. We just need to wrap
# single-frame parameters in a single list to re-use them for images.
render_initialization([image], mm, id_ind, exp_ind, camera, p, [q], [c], 0).view()
# -
# ## ITW Image fitting
# Actually run the optimisation.
# Return is a list of parameters recovered per-iteration per-frame.
params = fit_image(image, mm, id_ind, exp_ind, camera,
p, q, c,
lm_group='PTS', n_iters=10,
c_f=3.,
c_id=1.,
c_exp=3.,
c_l=1.,
n_samples=1000, compute_costs=True)
# ## Inspect results
# now we render the fitting.
iter_no = -1
frame.view()
render_iteration(mm, id_ind, exp_ind, image.shape,
camera, params, 0, iter_no).view(new_figure=True)
| notebooks/3. Fitting in-the-wild images.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/moonryul/pix2vertex.pytorch/blob/master/pix2vertexTest.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="9WL_2fs-KzGv"
# %load_ext autoreload
# %autoreload 2
# %matplotlib inline
# + id="a48sv0x6MY2H"
# + colab={"base_uri": "https://localhost:8080/"} id="_o_fsVFpNAef" outputId="7195b609-d0eb-44a1-b3cd-83802b80d898"
from google.colab import drive
drive.mount('/content/gdrive')
# + id="PfC7mhJUNLy8"
# !ln -s /root /content
# + colab={"base_uri": "https://localhost:8080/"} id="JBfAI3L3Nquq" outputId="a69a91a6-c9d4-4322-8506-638581b9f56d"
# !pwd
# + id="TPJaJuPUNYkr"
import sys
sys.path.append('/content/gdrive/MyDrive/ColabNotebooks/fastaiPart2/course-v3/nbs/dl2')
# + colab={"base_uri": "https://localhost:8080/"} id="TlUcYws2MuVZ" outputId="0ed29e41-dece-4924-eb4b-0ca8e8a87403"
# !git clone https://github.com/eladrich/pix2vertex.pytorch.git
# + id="H2YvT1rcMw3l"
# %cd pix2vertex.pytorch
# !python setup.py install
# + colab={"base_uri": "https://localhost:8080/", "height": 718} id="EQg0e70xOTSc" outputId="014b2c4a-10fc-4e9e-cfa6-3ee5e2a1d004"
import pix2vertex as p2v
from imageio import imread
image = imread('examples/sample.jpg')
result, crop = p2v.reconstruct(image)
# Interactive visualization in a notebook
p2v.vis_depth_interactive(result['Z_surface'])
# Static visualization using matplotlib
p2v.vis_depth_matplotlib(crop, result['Z_surface'])
# Export to STL
p2v.save2stl(result['Z_surface'], 'res.stl')
| pix2vertexTest.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Copyright 2021 NVIDIA Corporation. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
# -
# <img src="http://developer.download.nvidia.com/compute/machine-learning/frameworks/nvidia_logo.png" style="width: 90px; float: right;">
#
# # NVTabular demo on Rossmann data - Feature Engineering & Preprocessing
#
# ## Overview
#
# NVTabular is a feature engineering and preprocessing library for tabular data designed to quickly and easily manipulate terabyte scale datasets used to train deep learning based recommender systems. It provides a high level abstraction to simplify code and accelerates computation on the GPU using the RAPIDS cuDF library.
# ### Learning objectives
#
# This notebook demonstrates the steps for carrying out data preprocessing, transformation and loading with NVTabular on the Kaggle Rossmann [dataset](https://www.kaggle.com/c/rossmann-store-sales/overview). Rossmann operates over 3,000 drug stores in 7 European countries. Historical sales data for 1,115 Rossmann stores are provided. The task is to forecast the "Sales" column for the test set.
#
# The following example will illustrate how to use NVTabular to preprocess and feature engineer the data for further training deep learning models. We provide notebooks for training neural networks in [TensorFlow](https://github.com/NVIDIA/NVTabular/blob/main/examples/99-applying-to-other-tabular-data-problems-rossmann/03a-Training-with-TF.ipynb), [PyTorch](https://github.com/NVIDIA/NVTabular/blob/main/examples/99-applying-to-other-tabular-data-problems-rossmann/03b-Training-with-PyTorch.ipynb) and [FastAI](https://github.com/NVIDIA/NVTabular/blob/main/examples/99-applying-to-other-tabular-data-problems-rossmann/04-Training-with-FastAI.ipynb). We'll use a [dataset built by FastAI](https://github.com/fastai/course-v3/blob/master/nbs/dl1/lesson6-rossmann.ipynb) for solving the [Kaggle Rossmann Store Sales competition](https://www.kaggle.com/c/rossmann-store-sales). Some cuDF preprocessing is required to build the appropriate feature set, so make sure to run [01-Download-Convert.ipynb](https://github.com/NVIDIA/NVTabular/blob/main/examples/99-applying-to-other-tabular-data-problems-rossmann/01-Download-Convert.ipynb) first before going through this notebook.
# +
import os
import json
import nvtabular as nvt
from nvtabular import ops
# -
# ## Preparing our dataset
# Let's start by defining some of the a priori information about our data, including its schema (what columns to use and what sorts of variables they represent), as well as the location of the files corresponding to some particular sampling from this schema. Note that throughout, I'll use UPPERCASE variables to represent this sort of a priori information that you might usually encode using commandline arguments or config files.<br><br>
# We use the data schema to define our pipeline.
# +
DATA_DIR = os.environ.get(
"OUTPUT_DATA_DIR", os.path.expanduser("~/nvt-examples/data/")
)
CATEGORICAL_COLUMNS = [
"Store",
"DayOfWeek",
"Year",
"Month",
"Day",
"StateHoliday",
"CompetitionMonthsOpen",
"Promo2Weeks",
"StoreType",
"Assortment",
"PromoInterval",
"CompetitionOpenSinceYear",
"Promo2SinceYear",
"State",
"Week",
"Events",
"Promo_fw",
"Promo_bw",
"StateHoliday_fw",
"StateHoliday_bw",
"SchoolHoliday_fw",
"SchoolHoliday_bw",
]
CONTINUOUS_COLUMNS = [
"CompetitionDistance",
"Max_TemperatureC",
"Mean_TemperatureC",
"Min_TemperatureC",
"Max_Humidity",
"Mean_Humidity",
"Min_Humidity",
"Max_Wind_SpeedKm_h",
"Mean_Wind_SpeedKm_h",
"CloudCover",
"trend",
"trend_DE",
"AfterStateHoliday",
"BeforeStateHoliday",
"Promo",
"SchoolHoliday",
]
LABEL_COLUMNS = ["Sales"]
COLUMNS = CATEGORICAL_COLUMNS + CONTINUOUS_COLUMNS + LABEL_COLUMNS
# -
# What files are available to train on in our data directory?
# ! ls $DATA_DIR
# `train.csv` and `valid.csv` seem like good candidates, let's use those.
TRAIN_PATH = os.path.join(DATA_DIR, "train.csv")
VALID_PATH = os.path.join(DATA_DIR, "valid.csv")
# ### Defining our Data Pipeline
# The first step is to define the feature engineering and preprocessing pipeline.<br><br>
# NVTabular ships with multiple predefined calculations, called `ops`. An `op` is applied to a `ColumnGroup` via the overloaded `>>` operator, which in turn returns a new `ColumnGroup`. A `ColumnGroup` is a list of column names as text.<br><br>
# **Example:**<br>
# features = [*\<column name\>*, ...] >> *\<op1\>* >> *\<op2\>* >> ...
#
# This may sound more complicated than it is. Let's define our first pipeline for the Rossmann dataset. <br><br>We need to categorify the categorical input features. This converts the categorical values of a feature into contiguous integers (0, ..., |C|), which is required by the embedding layer of a neural network.
# * Initial `ColumnGroup` is `CATEGORICAL_COLUMNS`
# * `Op` is `Categorify`
cat_features = CATEGORICAL_COLUMNS >> ops.Categorify()
# We can visualize the calculation with `graphviz`.
(cat_features).graph
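# To build intuition for what `Categorify` does, here is a minimal pure-Python sketch of the idea - build a per-column vocabulary and replace each value with its integer code. This is an illustration only, not NVTabular's actual encoding, which also handles nulls, frequency thresholds, and GPU execution:

```python
def categorify(values):
    """Map each distinct value to a contiguous integer code.

    Index 0 is reserved for unseen/null values, mirroring a common
    convention for embedding lookups; illustrative sketch only.
    """
    vocab = {v: i for i, v in enumerate(sorted(set(values)), start=1)}
    return [vocab.get(v, 0) for v in values], vocab

codes, vocab = categorify(["a", "c", "a", "b"])
```

# In NVTabular this vocabulary is computed during `workflow.fit` and applied during `transform`.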
# Our next step is to process the continuous columns. We want to fill in missing values and normalize the continuous features with mean=0 and std=1.
# * Initial `ColumnGroup` is `CONTINUOUS_COLUMNS`
# * First `Op` is `FillMissing`
# * Second `Op` is `Normalize`
cont_features = CONTINUOUS_COLUMNS >> ops.FillMissing() >> ops.Normalize()
(cont_features).graph
# Finally, we need to apply the LogOp to the label/target column.
label_feature = LABEL_COLUMNS >> ops.LogOp()
(label_feature).graph
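# Sales counts are heavily right-skewed, and `LogOp` applies an element-wise logarithm so the network regresses a more symmetric target. A sketch of the effect, using `log1p` (whether the actual op adds 1 before the log is an implementation detail of NVTabular; `log1p` keeps zero sales finite):

```python
import math

def log_op(values):
    # Element-wise log1p as an illustrative stand-in for NVTabular's
    # LogOp: compresses large values, maps 0 to 0.
    return [math.log1p(v) for v in values]

sales = [0, 9, 99, 9999]
log_sales = log_op(sales)
```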
# We can visualize the full workflow by concatenating the output `ColumnGroups`.
(cat_features + cont_features + label_feature).graph
# ### Workflow
# An NVTabular `workflow` orchestrates the pipelines. We initialize the NVTabular `workflow` with the output `ColumnGroup`s.
proc = nvt.Workflow(cat_features + cont_features + label_feature)
# ### Datasets
# In general, the `Op`s in our `Workflow` will require measurements of statistical properties of our data in order to be leveraged. For example, the `Normalize` op requires measurements of the dataset mean and standard deviation, and the `Categorify` op requires an accounting of all the categories a particular feature can manifest. However, we frequently need to measure these properties across datasets which are too large to fit into GPU memory (or CPU memory for that matter) at once.
#
# NVTabular solves this by providing the `Dataset` class, which breaks a set of parquet or csv files into a collection of `cudf.DataFrame` chunks that can fit in device memory. Under the hood, the data decomposition corresponds to the construction of a [dask_cudf.DataFrame](https://docs.rapids.ai/api/cudf/stable/dask-cudf.html) object. By representing our dataset as a lazily-evaluated [Dask](https://dask.org/) collection, we can handle the calculation of complex global statistics (and later, can also iterate over the partitions while feeding data into a neural network).
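# The chunk-wise fitting idea can be sketched in plain Python: accumulate sufficient statistics (count, sum, sum of squares) per chunk and combine them into the global mean and standard deviation that `Normalize` needs - no two chunks ever have to be in memory together. Illustrative only; NVTabular performs these reductions with dask_cudf on the GPU.

```python
import math

def chunked_mean_std(chunks):
    # Accumulate count, sum, and sum-of-squares one chunk at a time,
    # then combine into the global mean and (population) std.
    n = s = ss = 0
    for chunk in chunks:
        n += len(chunk)
        s += sum(chunk)
        ss += sum(x * x for x in chunk)
    mean = s / n
    std = math.sqrt(ss / n - mean * mean)
    return mean, std

mean, std = chunked_mean_std([[1.0, 2.0], [3.0, 4.0]])
```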
#
#
train_dataset = nvt.Dataset(TRAIN_PATH)
valid_dataset = nvt.Dataset(VALID_PATH)
# +
PREPROCESS_DIR = os.path.join(DATA_DIR, "ross_pre/")
PREPROCESS_DIR_TRAIN = os.path.join(PREPROCESS_DIR, "train")
PREPROCESS_DIR_VALID = os.path.join(PREPROCESS_DIR, "valid")
# ! rm -rf $PREPROCESS_DIR # remove previous trials
# ! mkdir -p $PREPROCESS_DIR_TRAIN
# ! mkdir -p $PREPROCESS_DIR_VALID
# -
# Now that we have our datasets, we'll apply our `Workflow` to them and save the results out to parquet files for fast reading at train time. Similar to the `scikit-learn` API, we collect the statistics of our train dataset with `.fit`.
proc.fit(train_dataset)
# We transform our datasets with `.transform` and persist them to disk with `.to_parquet`. We shuffle the train dataset before storing it to disk to provide more randomness during deep learning training.
proc.transform(train_dataset).to_parquet(PREPROCESS_DIR_TRAIN, shuffle=nvt.io.Shuffle.PER_WORKER)
proc.transform(valid_dataset).to_parquet(PREPROCESS_DIR_VALID, shuffle=None)
# ### Finalize embedding tables
#
# In the next steps, we will train a deep learning model with either TensorFlow, PyTorch or FastAI. Our training pipeline requires information about the data schema to define the neural network architecture. In addition, we store the structure of the embedding tables.
EMBEDDING_TABLE_SHAPES = nvt.ops.get_embedding_sizes(proc)
EMBEDDING_TABLE_SHAPES
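# Each entry maps a categorical column to a `(cardinality, embedding_dim)` pair. The exact sizing rule is internal to NVTabular, but heuristics of the following shape - grow the width sub-linearly with cardinality, capped at a maximum - are common (this is an assumed, illustrative rule, not necessarily NVTabular's):

```python
def embedding_size(cardinality, max_dim=512):
    # A common heuristic (not necessarily NVTabular's): scale the
    # embedding width with a fractional power of the cardinality.
    dim = min(max_dim, round(1.6 * cardinality ** 0.56))
    return cardinality, max(1, dim)
```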
json.dump(
    {
        "EMBEDDING_TABLE_SHAPES": EMBEDDING_TABLE_SHAPES,
        "CATEGORICAL_COLUMNS": CATEGORICAL_COLUMNS,
        "CONTINUOUS_COLUMNS": CONTINUOUS_COLUMNS,
        "LABEL_COLUMNS": LABEL_COLUMNS,
    },
    open(PREPROCESS_DIR + "/stats.json", "w"),
)
# !ls $PREPROCESS_DIR
| examples/tabular-data-rossmann/02-ETL-with-NVTabular.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import os
from datetime import date
cols = ["Resource","CustomAttr15","Summary","LastOccurrence","CustomAttr11"] #Range [A-E]
single = os.path.join(os.getcwd(), "single.csv")  # portable path join instead of "\\" concatenation
print(single)
df = pd.read_csv(single)
Rn = df[cols]
#print(Rn)
count = 1
TS = lambda x : '2G' if ('2G SITE DOWN' in x) \
else ('3G' if ('3G SITE DOWN' in x) \
else ('4G' if ('4G SITE DOWN' in x) \
else ('MF' if ('MAIN' in x) \
else ('DC' if ('VOLTAGE' in x) \
else ('TM' if ('TEMPERATURE' in x) \
else ('SM' if ('SMOKE' in x) \
else ('GN' if ('GEN' in x) \
else ('GN' if ('GENSET' in x) \
else ('TH' if ('THEFT' in x) \
else ('2_CELL' if ('2G CELL DOWN' in x) \
else ('3_CELL' if ('3G CELL DOWN' in x) \
else ('4_CELL' if ('4G CELL DOWN' in x) \
else "NA"))))))))))))
def loop(Rn):
    count = 0  # initialize locally; referencing the module-level count would raise UnboundLocalError
    for rw in range(Rn.shape[0]):
        rwvalue1 = Rn.iloc[rw, 1]  # Column 1
        rwvalue2 = Rn.iloc[rw, 2]  # Column 2
        rwvalue3 = Rn.iloc[rw, 3]  # Column 3
        count = count + 1
        print('Row Number', count, ':', rwvalue1, '>', rwvalue2, '>', rwvalue3)
    for column in Rn[['CustomAttr15']]:
        colseries = Rn[column]
        print(colseries.values)  # Transposed values of the column
    for (colname, coldata) in Rn.items():  # iterate the A-E columns; iteritems() is deprecated
        print(colname)           # Column name
        print(coldata.values)    # Transposed values of the column
    print('end')
class omdf:
    def __init__(self, dff):
        self.df = dff
        self.arr = self.df.to_numpy()
    def df_add_col_instr(self):
        self.df['cat'] = self.df.apply(lambda row: TS(row.Summary), axis=1)
        return self.df.to_dict()
    def df_add_col_dic(self, colname, newcol, dic):
        self.df[newcol] = self.df['scode'].map(dic)
        return self.df.to_dict()
    def df_add_col_slice_str(self, newcolname):
        self.df[newcolname] = self.df.apply(lambda x: x.CustomAttr15[0:5], axis=1)
        return self.df.to_dict()
    def df_rmv_column(self, lis):
        ndf = self.df[lis]
        return ndf.to_dict()
    def df_countif(self, column_name, newcolumn_name):
        # Excel-style COUNTIF: count occurrences of each value, merge counts back in
        code = pd.Series(self.df[column_name])
        lst = code.values.tolist()
        dic = {}
        for i in lst:
            dic[i] = lst.count(i)
        df_occ = pd.DataFrame(dic.items(), columns=[column_name, newcolumn_name])
        mdf = self.df.merge(df_occ, on=column_name)
        return mdf
    def df_instr(self, colname, srcstr):
        # Excel-style INSTR: count substring occurrences per row
        self.df[srcstr] = list(map(lambda x: x.count(srcstr), self.df[colname]))
        return self.df
    def df_vlookup(self, df2, common_colname):
        # Excel-style VLOOKUP via an inner merge on the common column
        mdf = self.df.merge(df2, on=common_colname)
        return mdf
#Rn['ABC'] = list(map(lambda x: x.count("CXTKN"), Rn['CustomAttr15']))
#print(Rn)
#ndf = countif('CustomAttr15','CountOf')
x = omdf(Rn)
#ndf = x.df_instr('CustomAttr15','DHSDR')
#print(ndf)
# -
L = lambda df,colname,dic : df[colname].map(dic)
dic = {'ERI-2G SITE DOWN':'2G','ERI-3G SITE DOWN':'3G'}
#dv = [value for key, value in dic.items() if '2G SITE DOWN' in key]
print(L(Rn,'Summary',dic))
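# As an aside, the deeply nested `TS` lambda defined earlier is easier to maintain as an ordered rule table scanned in priority order. A sketch (note that `GENSET` is subsumed by the earlier `GEN` rule, exactly as in `TS`):

```python
# Ordered (substring, code) pairs; earlier entries win, mirroring the
# precedence of the nested conditionals in TS above.
RULES = [
    ("2G SITE DOWN", "2G"), ("3G SITE DOWN", "3G"), ("4G SITE DOWN", "4G"),
    ("MAIN", "MF"), ("VOLTAGE", "DC"), ("TEMPERATURE", "TM"),
    ("SMOKE", "SM"), ("GEN", "GN"), ("THEFT", "TH"),
    ("2G CELL DOWN", "2_CELL"), ("3G CELL DOWN", "3_CELL"),
    ("4G CELL DOWN", "4_CELL"),
]

def classify(summary):
    # Return the code of the first rule whose substring matches.
    for needle, code in RULES:
        if needle in summary:
            return code
    return "NA"
```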
| Z_ALL_FILE/Jy1/omdf_loop-checkpoint.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.7.1
# language: julia
# name: julia-1.7
# ---
import Pkg
Pkg.activate("..")
using PyCall
py"""
import pickle
import networkx as nx
G = pickle.load(open("/home/will/research/xaqlab/ergm/G_proof_v6.p", "rb"))
"""
py"""
import numpy as np
import numpy.linalg as npl
import numpy.random as npr

layer_names = list(set([G.nodes[n]['layer'] for n in G.nodes]))
layers = {}
for layer in layer_names:
    layers[layer] = G.subgraph([n for n in G.nodes if G.nodes[n]['layer'] == layer])

def p(n):
    return np.array([G.nodes[n]['soma_' + i] for i in 'xyz'])

def subsample(layer, k, s):
    Hs = []
    layer_nodes = [n for n in G.nodes if G.nodes[n]['layer'] == layer]
    m = len(layer_nodes)
    ps = np.vstack([p(n) for n in layer_nodes])
    for _ in range(s):
        j0 = npr.randint(m)
        ds = npl.norm(ps - ps[j0, :], axis=1)
        js = np.argpartition(ds, k)[:k]
        ns = [layer_nodes[j] for j in js]
        Hs.append(G.subgraph(ns))
    return Hs
"""
# +
using ergm.spaces
function nx_to_digraph(nx)
    A = py"nx.adjacency_matrix($nx).todense()"
    DiGraph(A .> 0)
end
n = 30
layer = "L2/L3"
Gs = map(nx_to_digraph, py"subsample($layer, $n, 10000)");
# +
function desp(G)
    A = G.adjacency
    n = size(A, 1)
    B = A .* (A ^ 2)'
    [sum(B .== m) for m ∈ 1:n-2]
end

function sample_er(n, p)
    DiGraph(convert(Matrix{Bool}, rand(Float64, (n, n)) .< p))
end
# -
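# For reference, the `desp` statistic above - for each directed edge i→j, count the two-paths j→k→i closing a triangle via `B = A .* (A^2)'`, then histogram edges by shared-partner count m = 1..n-2 - transcribes to NumPy as follows (a sketch assuming a dense 0/1 adjacency matrix):

```python
import numpy as np

def desp(A):
    # B[i, j] = A[i, j] * (number of two-paths j -> k -> i), then count
    # how many edges have exactly m shared partners, m = 1 .. n-2.
    n = A.shape[0]
    B = A * (A @ A).T
    return [int(np.sum(B == m)) for m in range(1, n - 1)]
```

# A directed 3-cycle gives each of its three edges exactly one shared partner.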
Gs_ER = [sample_er(30, 0.4) for _ ∈ 1:10000]
s_er = hcat([desp(G) for G ∈ Gs_ER]...);
using Plots
histogram(s_er[5, :])
| nbs/.ipynb_checkpoints/gwesp-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from arcgis.features import FeatureLayer
lyr_url = r'https://arcgis.sdi.abudhabi.ae/arcgis/rest/services/Pub/Generic_Search/MapServer/0'
lyr = FeatureLayer(lyr_url)
lyr
df = lyr.query().sdf
df.spatial  # the GeoAccessor of the spatially enabled DataFrame
| notebooks/00_demo_layer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Actions
# - generate a heatmap for each of the DFs - White, Red, All together
# - generate SVM models for each of the DFs predicting quality - White, Red, All
# - normalize / regularize the data
# - PCA or refinement
#
# https://scikit-learn.org/stable/modules/svm.html#multi-class-classification
#
# Generate the scores to: https://scikit-learn.org/stable/modules/svm.html#scores-probabilities
#
#
# Hyperparameter Tuning with Scikit
# https://subscription.packtpub.com/book/data/9781789615326/10/ch10lvl1sec75/hyperparameter-tuning-with-grid-search
#
# https://scikit-learn.org/stable/auto_examples/svm/plot_svm_nonlinear.html#sphx-glr-auto-examples-svm-plot-svm-nonlinear-py
#
# ## Try different kernels
# https://scikit-learn.org/stable/auto_examples/svm/plot_iris_svc.html#sphx-glr-auto-examples-svm-plot-iris-svc-py
#
# +
import os
import numpy as np
import pandas as pd
def load_data(data_path_red, data_path_white):
    column_header = ["fixed_acidity", "volatile_acidity", "citric_acid",
                     "residual_sugar", "chlorides", "free_sulfur_dioxide",
                     "total_sulfur_dioxide", "density", "pH", "sulphates",
                     "alcohol", "quality"]
    df_red = pd.read_csv(data_path_red, sep=';', names=column_header, header=0)
    df_red['color'] = 1
    df_white = pd.read_csv(data_path_white, sep=';', names=column_header, header=0)
    df_white['color'] = 0
    total_rows = len(df_white) + len(df_red)
    df_all = pd.concat([df_red, df_white])  # DataFrame.append is deprecated
    assert len(df_all) == total_rows
    return df_red, df_white, df_all
# -
from helpers import *
# +
#load the wine
data_path_red = '../.data/wine/winequality-red.csv'
data_path_white = '../.data/wine/winequality-white.csv'
df_red, df_white, df_all = load_data(data_path_red, data_path_white)
# -
# retrieve JUST the feature columns - except the "color"
features_all = df_all.iloc[:, 0:11]  # iloc ranges exclude the upper bound
labels_all = df_all.iloc[:, 11]
# +
from sklearn import svm
X = features_all
y = labels_all
# TODO: https://scikit-learn.org/stable/modules/svm.html#multi-class-classification
clf = svm.SVC(decision_function_shape='ovo')
#clf = svm.SVC()
model = clf.fit(X, y)
# -
model.predict(X)
df_red.iloc[ : , 10:12]
# +
import matplotlib.pyplot as plt
import seaborn as sns
figsize = 18, 18
plt.figure(figsize = figsize)
rv = df_all.corr()
sns.heatmap(rv, annot = True, annot_kws = {"size": 14}, label = "foo", cmap = "YlGnBu") # Blues, Blues_r YlGnBu
plt.show()
# -
df_red.corr()
rv
# ## SVM using RBF
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler, Normalizer
from sklearn.datasets import load_iris
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.model_selection import GridSearchCV
from helpers import *
# +
#load the wine
data_path_red = '../.data/wine/winequality-red.csv'
data_path_white = '../.data/wine/winequality-white.csv'
df_red, df_white, df_all = load_data(data_path_red, data_path_white)
X, y = get_features_and_labels(df_all)
X = X.to_numpy()
y = y.to_numpy()
# -
X_2d = X[:, :2]
X_2d = X_2d[y > 0]
y_2d = y[y > 0]
y_2d -= 1
scaler = StandardScaler() # Normalizer()
X = scaler.fit_transform(X)
X_2d = scaler.fit_transform(X_2d)
# +
#https://scikit-learn.org/stable/auto_examples/svm/plot_rbf_parameters.html
C_range = np.logspace(-2, 10, 13)
gamma_range = np.logspace(-9, 3, 13)
param_grid = dict(gamma=gamma_range, C=C_range)
cv = StratifiedShuffleSplit(n_splits=5, test_size=0.2, random_state=42)
grid = GridSearchCV(SVC(), param_grid=param_grid, cv=cv)
grid.fit(X, y)
print("The best parameters are %s with a score of %0.2f"
% (grid.best_params_, grid.best_score_))
# -
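# The grid above is logarithmic: 13 values of `C` spanning 1e-2 to 1e10 crossed with 13 values of `gamma` spanning 1e-9 to 1e3, so the 5-split search fits 13 × 13 × 5 = 845 models before the final refit on the best parameters. The bookkeeping:

```python
import numpy as np

C_range = np.logspace(-2, 10, 13)     # 1e-2 ... 1e10, one decade apart
gamma_range = np.logspace(-9, 3, 13)  # 1e-9 ... 1e3
n_splits = 5
# Total model fits performed by the grid search (excluding the refit).
n_fits = len(C_range) * len(gamma_range) * n_splits
```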
| final/notebooks/scratch/svm-scratch.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Chapter 7
import numpy as np
import pandas as pd
from sklearn.datasets import load_breast_cancer
bc = load_breast_cancer()
y = bc.target
X = pd.DataFrame.from_records(data=bc.data, columns=bc.feature_names)
# convert to a DataFrame
df = X
df['target'] = y
# # Chapter 10 -- the method from Chapter 11 would probably work better
from sklearn.datasets import load_iris
data = load_iris()
# convert to a DataFrame
X = pd.DataFrame.from_records(data=data.data, columns=data.feature_names)
df = X
df['target'] = data.target
df.shape
# # Chapter 11
# +
np.random.seed(42)
x = np.array([i * np.pi / 180 for i in range(-180, 60, 5)])
# add normally distributed noise
y = np.cos(x) + np.random.normal(0, 0.15, len(x))
data = pd.DataFrame(np.column_stack([x, y]), columns=['x', 'y'])
pow_max = 13
# construct x raised to different powers: x_2 ... x_12
for i in range(2, pow_max):
    colname = 'x_%d' % i
    data[colname] = data['x']**i
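# The loop above expands the single feature `x` into a polynomial basis x, x_2, ..., x_12 - the raw material for fitting increasingly flexible (and overfitting-prone) models. Its effect, sketched on a toy frame:

```python
import pandas as pd

# Toy version of the expansion: 3 points, powers up to x^4.
data = pd.DataFrame({"x": [1.0, 2.0, 3.0]})
pow_max = 5
for i in range(2, pow_max):
    data[f"x_{i}"] = data["x"] ** i
# Columns are now: x, x_2, x_3, x_4.
```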
# +
# iris is a 3-class dataset
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
X, y = load_iris(return_X_y=True)
# use the liblinear solver with the one-vs-rest multi-class strategy
clf = LogisticRegression(solver='liblinear',
multi_class='auto',
random_state=42).fit(X, y)
# -
# # Chapter 13
# +
import numpy as np
np.random.seed(42)
# construct 500 data points
n = 500
X = np.array([i / n for i in range(n + 1)])
# add noise with variance 0.01 (standard deviation 0.1)
y = np.array([i + np.random.normal(scale=0.1) for i in X])
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,
y,
test_size=0.3,
random_state=42)
# -
from sklearn.datasets import make_moons
n = 200
X, y = make_moons(n, noise=0.2, random_state=42)
# # Chapter 14
# +
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X,
y,
test_size=0.3,
random_state=42)
# -
y.shape
# # end
# normal distribution
import numpy as np
import pandas as pd
np.random.normal(loc=0,scale=1.0,size=100)
| ch03-data_preparation/用到的数据集.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:root] *
# language: python
# name: conda-root-py
# ---
# # Training Large Recommender Models with NVIDIA HugeCTR and Vertex AI
#
# This notebook demonstrates how to use Vertex AI Training to operationalize training and hyperparameter tuning of large scale deep learning models developed with NVIDIA HugeCTR framework. The notebook compiles prescriptive guidance for the following tasks:
#
# - Building a custom Vertex training container derived from NVIDIA NGC Merlin Training image
# - Configuring, submitting and monitoring a Vertex custom training job
# - Configuring, submitting and monitoring a Vertex hyperparameter tuning job
# - Retrieving and analyzing results of a hyperparameter tuning job
#
# The deep learning model used in this sample is [DeepFM](https://arxiv.org/abs/1703.04247) - a Factorization-Machine based Neural Network for CTR Prediction. The HugeCTR implementation of this model used in this notebook has been configured for the [Criteo dataset](https://ailab.criteo.com/download-criteo-1tb-click-logs-dataset/).
#
# ## 1. HugeCTR Overview
#
# [HugeCTR](https://github.com/NVIDIA-Merlin/HugeCTR) is NVIDIA's GPU-accelerated, highly scalable recommender framework. We highly encourage reviewing the [HugeCTR User Guide](https://github.com/NVIDIA-Merlin/HugeCTR/blob/master/docs/hugectr_user_guide.md) before proceeding with this notebook.
#
# NVIDIA HugeCTR facilitates highly scalable implementations of leading deep learning recommender models including Google's [Wide and Deep](https://arxiv.org/abs/1606.07792), Facebook's [DLRM](https://ai.facebook.com/blog/dlrm-an-advanced-open-source-deep-learning-recommendation-model/), and the [DeepFM](https://arxiv.org/abs/1703.04247) model used in this notebook.
#
# A unique feature of HugeCTR is support for model-parallel embedding tables. Applying model-parallelism to embedding tables enables massive scalability. Industrial-grade deep learning recommendation systems most often employ very large embedding tables. User and item embedding tables - cornerstones of any recommender - can easily exceed tens of millions of rows. Without model-parallelism it would be impossible to fit embedding tables this large in the memory of a single device - especially when using large embedding vectors. In HugeCTR, embedding tables can span device memory across multiple GPUs on a single node or even across GPUs in a large distributed cluster.
#
# HugeCTR supports multiple model-parallelism configurations for embedding tables. Refer to [the HugeCTR API reference](https://github.com/NVIDIA-Merlin/HugeCTR/blob/master/docs/hugectr_layer_book.md#embedding-types-detail) for detailed descriptions. In the DeepFM implementation used in this notebook, we utilize the `LocalizedSlotSparseEmbeddingHash` embedding type. In this embedding type, an embedding table is segmented into multiple slots, or feature fields. Each slot stores embeddings for a single categorical feature. A given slot is allocated to a single GPU and does not span multiple GPUs. However, in a multi-GPU environment different slots can be allocated to different GPUs.
#
# The following diagram demonstrates an example configuration on a single node with multiple GPUs - the hardware topology used by the Vertex jobs in this notebook.
#
#
# <img src="./images/deepfm.png" alt="Model parallel embeddings" style="width: 70%;"/>
#
#
# The Criteo dataset has 24 categorical features, so there are 24 slots in the embedding table. Cardinalities of the categorical variables vary from the low teens to tens of millions, so the dimensions of the slots vary accordingly. Each slot in the embedding table utilizes an embedding vector of the same size. Note that the distribution of slots across GPUs is handled by HugeCTR; you don't have to explicitly pin a slot to a GPU.
#
# Dense layers of the DeepFM models are replicated on all GPUs using a canonical data-parallel pattern.
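# Although slot placement is handled by HugeCTR internally, the idea behind localized slots can be illustrated with a simple round-robin assignment - each of the 24 Criteo slots lives wholly on one of the 4 GPUs, so embedding lookups for a given feature touch a single device. An illustrative sketch, not HugeCTR's actual placement policy:

```python
def assign_slots(num_slots, num_gpus):
    # Round-robin placement: slot s -> GPU s % num_gpus. Every slot is
    # local to exactly one device and is never sharded across devices.
    placement = {gpu: [] for gpu in range(num_gpus)}
    for slot in range(num_slots):
        placement[slot % num_gpus].append(slot)
    return placement

placement = assign_slots(num_slots=24, num_gpus=4)
```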
#
# The choice of optimizer is critical when training large deep learning recommender systems. Different optimizers may result in significantly different convergence rates, impacting both time (cost) to train and final model performance. Since large recommender systems are often retrained on a frequent basis, minimizing time to train is one of the key design objectives for a training workflow. In this notebook we use the Adam optimizer, which has proven to work well with many deep learning recommender system architectures.
#
# You can find the code that implements the DeepFM model in the [src/training/hugectr/trainer/model.py](src/training/hugectr/trainer/model.py) file.
#
#
# ## 2. Model Training Overview
#
# The training workflow has been optimized for Vertex AI Training.
# - Google Cloud Storage (GCS) and Vertex Training GCS Fuse are used for accessing training and validation data
# - A single node, multiple GPUs worker pool is used for Vertex Training jobs.
# - Training code has been instrumented to support hyperparameter tuning using Vertex Training Hyperparameter Tuning Job.
#
# You can find the code that implements the training workflow in the [src/training/hugectr/trainer/task.py](src/training/hugectr/trainer/task.py) file.
#
# ### Training data access
#
# Large deep learning recommender systems are trained on massive datasets often hundreds of terabytes in size. Maintaining high-throughput when streaming training data to GPU workers is of critical importance. HugeCTR features a highly efficient multi-threaded data reader that parallelizes data reading and model computations. The reader accesses training data through a file system interface. The reader cannot directly access object storage systems like Google Cloud Storage, which is a canonical storage system for large scale training and validation datasets in Vertex AI Training. To expose Google Cloud Storage through a file system interface, the notebook uses an integrated feature of Vertex AI - Google Cloud Storage FUSE. Vertex AI GCS FUSE provides a high performance file system interface layer to GCS that is self-tuning and requires minimal configuration. The following diagram depicts the training data access configuration:
#
# <img src="./images/gcsfuse.png" alt="GCS Fuse" style="width:70%"/>
#
#
# ### Vertex AI Training worker pool configuration
#
# HugeCTR supports both single node, multiple GPU configurations and multiple node, multiple GPU distributed cluster topologies. In this sample, we use a single node, multiple GPU configuration. Due to the computational complexity of modern deep learning recommender models we recommend using Vertex Training A2 series machines for large models implemented with HugeCTR. The A2 machines can be configured with up to 16 A100 GPUs, 96 vCPUs, and 1,360 GB of RAM. Each A100 GPU has 40 GB of device memory. These are powerful configurations that can handle complex models with large embeddings.
#
# In this sample we use the `a2-highgpu-4g` machine type.
#
# Both custom training and hyperparameter tuning Vertex AI jobs demonstrated in this notebook are configured to use a [custom training container](https://cloud.google.com/vertex-ai/docs/training/containers-overview). The container is a derivative of the [NVIDIA NGC Merlin training container](https://ngc.nvidia.com/catalog/containers/nvidia:merlin:merlin-training). The definition of the container image is found in the [Dockerfile.hugectr](src/Dockerfile.hugectr) file.
#
# ### HugeCTR hyperparameter tuning with Vertex AI
#
# The training module has been instrumented to support [hyperparameter tuning with Vertex AI](https://cloud.google.com/vertex-ai/docs/training/hyperparameter-tuning-overview). The custom container includes the [cloudml-hypertune package](https://github.com/GoogleCloudPlatform/cloudml-hypertune), which is used to report the results of model evaluations to the Vertex AI hypertuning service. The following diagram depicts the training flow implemented by the training module.
#
#
# <img src="./images/hugectrtrainer.png" alt="Training regimen" style="height:15%; width:40%"/>
#
#
# Note that as of HugeCTR v3.2 release, the `hugectr.inference.InferenceSession.evaluate` method used in the trainer module only supports the *AUC* evaluation metric.
#
# ## 3. Executing Model Training on Vertex AI
#
# This notebook assumes that the Criteo dataset has been preprocessed using the preprocessing workflow detailed in the [01-dataset-preprocessing.ipynb](01-dataset-preprocessing.ipynb) notebook and the resulting Parquet training and validation splits, and the processed data schema have been stored in Google Cloud Storage.
#
# As you walk through the notebook you will execute the following steps:
# - Configure notebook environment settings, including GCP project, compute region, and the GCS locations of training and validation data splits.
# - Build a custom Vertex training container based on NVIDIA NGC Merlin Training container
# - Configure and submit a Vertex custom training job
# - Configure and submit a Vertex hyperparameter training job
# - Retrieve the results of the hyperparameter tuning job
# ## Setup
#
# In this section of the notebook you configure your environment settings, including a GCP project, a GCP compute region, a Vertex AI service account and a Vertex AI staging bucket. You also set the locations of training and validation splits, and their schema as created in the [01-dataset-preprocessing.ipynb](01-dataset-preprocessing.ipynb) notebook.
#
# Make sure to update the below cells with the values reflecting your environment.
# +
import json
import os
import time
import shutil
from google.cloud import aiplatform as vertex_ai
from google.cloud.aiplatform import hyperparameter_tuning as hpt
# +
PROJECT = 'jk-mlops-dev' # Change to your project.
REGION = 'us-central1' # Change to your region.
BUCKET = 'jk-merlin-dev' # Change to your bucket.
VERSION = 'v01'
MODEL_NAME = 'deepfm'
MODEL_DISPLAY_NAME = f'hugectr-{MODEL_NAME}-{VERSION}'
WORKSPACE = f'gs://{BUCKET}/{MODEL_DISPLAY_NAME}'
VERTEX_SA = f'<EMAIL>' # change to your service account.
IMAGE_NAME = 'hugectr-training'
IMAGE_URI = f'gcr.io/{PROJECT}/{IMAGE_NAME}'
DOCKERFILE = 'src/Dockerfile.hugectr'
DATA_ROOT = '/gcs/jk-criteo-bucket/criteo_processed_parquet'
TRAIN_DATA = os.path.join(DATA_ROOT, 'train/_file_list.txt')
VALID_DATA = os.path.join(DATA_ROOT, 'valid/_file_list.txt')
SCHEMA_PATH = os.path.join(DATA_ROOT, 'train/schema.pbtxt')
# -
# ### Initialize Vertex AI SDK
vertex_ai.init(
project=PROJECT,
location=REGION,
staging_bucket=os.path.join(WORKSPACE, 'stg')
)
# ## Submit a Vertex custom training job
#
# In this section of the notebook you define, submit and monitor a Vertex custom training job. As noted in the introduction, the job uses a custom training container that is a derivative of the [NVIDIA NGC Merlin training container image](https://ngc.nvidia.com/catalog/containers/nvidia:merlin:merlin-training). The custom container image packages the training module, which includes a DeepFM model definition - [src/training/hugectr/model.py](src/training/hugectr/model.py) - and a training and evaluation workflow - [src/training/hugectr/task.py](src/training/hugectr/task.py). The custom container image also installs the `cloudml-hypertune` package for integration with Vertex AI hypertuning.
# ### Build a custom training container
# ! cp src/Dockerfile.hugectr src/Dockerfile
# + tags=[]
# ! gcloud builds submit --timeout "2h" --tag {IMAGE_URI} src --machine-type=e2-highcpu-8
# -
# ### Configure a custom training job
#
# The training module accepts a set of parameters that allow you to fine-tune the DeepFM model implementation and configure the training workflow. Most of the parameters exposed by the training module map directly to the settings used in the [HugeCTR Python Interface](https://github.com/NVIDIA-Merlin/HugeCTR/blob/master/docs/python_interface.md#createsolver-method).
#
# - `NUM_EPOCHS`: The training workflow can run in either an epoch mode or a non-epoch mode. When the constant `NUM_EPOCHS` is set to a value greater than zero the model will be trained on the `NUM_EPOCHS` number of full epochs, where an epoch is defined as a single pass through all examples in the training data.
#
# - `MAX_ITERATIONS`: If `NUM_EPOCHS` is set to zero, you must set `MAX_ITERATIONS` to a value greater than zero. `MAX_ITERATIONS` defines the number of batches to train the model on. When `NUM_EPOCHS` is greater than zero, `MAX_ITERATIONS` is ignored.
#
# - `EVAL_INTERVAL` and `EVAL_BATCHES`: The model will be evaluated every `EVAL_INTERVAL` training batches using the `EVAL_BATCHES` validation batches during the main training loop. In the current implementation the evaluation metric is `AUC`.
#
# - `EVAL_BATCHES_FINAL`: After the main training loop completes, a final evaluation will be run using the `EVAL_BATCHES_FINAL`. The `AUC` value returned is reported to Vertex AI hypertuner.
#
# - `DISPLAY_INTERVAL`: Training progress will be reported every `DISPLAY_INTERVAL` batches.
#
# - `SNAPSHOT_INTERVAL`: When set to a value greater than zero, a snapshot will be saved every `SNAPSHOT_INTERVAL` batches.
#
# - `PER_GPU_BATCH_SIZE`: Per GPU batch size. This value should be set through experimentation and depends on model architecture, training features, and GPU type. It is highly dependent on device memory available in a particular GPU. In our scenario - DeepFM, Criteo datasets, and A100 GPU - a batch size of 2048 works well.
#
# - `LR`: The base learning rate for the HugeCTR solver.
#
# - `DROPOUT_RATE`: The base dropout rate used in DeepFM dense layers.
#
# - `NUM_WORKERS`: The number of HugeCTR data reader workers that concurrently load data. This value should be estimated through experimentation. **TBD** This is a per-GPU value. The default, which works well on A100 GPUs, is 12. For optimal performance, `NUM_WORKERS` should be aligned with the number of files (shards) in the training data.
#
# - `SCHEMA`: The path to the `schema.pbtxt` file generated during the transformation phase. It is required to extract the cardinalities of the categorical features.
#
# #### Set HugeCTR model and trainer configuration
NUM_EPOCHS = 0
MAX_ITERATIONS = 50000
EVAL_INTERVAL = 1000
EVAL_BATCHES = 500
EVAL_BATCHES_FINAL = 2500
DISPLAY_INTERVAL = 200
SNAPSHOT_INTERVAL = 0
PER_GPU_BATCH_SIZE = 2048
LR = 0.001
DROPOUT_RATE = 0.5
NUM_WORKERS = 12
# #### Set training node configuration
#
# As described in the overview, we use a single node, multiple GPU worker pool configuration. For a complex deep learning model like DeepFM, we recommend using A2 machines that are equipped with A100 GPUs.
#
# In this sample we use a `a2-highgpu-4g` machine. For production systems, you may consider even more powerful configurations - `a2-highgpu-8g` with 8 A100 GPUs or `a2-megagpu-16g` with 16 A100 GPUs.
MACHINE_TYPE = 'a2-highgpu-4g'
ACCELERATOR_TYPE = 'NVIDIA_TESLA_A100'
ACCELERATOR_NUM = 4
# #### Configure worker pool specifications
#
# In this cell we configure a worker pool specification for a Vertex Custom Training job. Refer to [Vertex AI Training documentation](https://cloud.google.com/vertex-ai/docs/training/create-custom-job) for more details.
# +
gpus = json.dumps([list(range(ACCELERATOR_NUM))]).replace(' ','')
worker_pool_specs = [
    {
        "machine_spec": {
            "machine_type": MACHINE_TYPE,
            "accelerator_type": ACCELERATOR_TYPE,
            "accelerator_count": ACCELERATOR_NUM,
        },
        "replica_count": 1,
        "container_spec": {
            "image_uri": IMAGE_URI,
            "command": ["python", "-m", "task"],
            "args": [
                f'--per_gpu_batch_size={PER_GPU_BATCH_SIZE}',
                f'--model_name={MODEL_NAME}',
                f'--train_data={TRAIN_DATA}',
                f'--valid_data={VALID_DATA}',
                f'--schema={SCHEMA_PATH}',
                f'--max_iter={MAX_ITERATIONS}',
                f'--max_eval_batches={EVAL_BATCHES}',
                f'--eval_batches={EVAL_BATCHES_FINAL}',
                f'--dropout_rate={DROPOUT_RATE}',
                f'--lr={LR}',
                f'--num_workers={NUM_WORKERS}',
                f'--num_epochs={NUM_EPOCHS}',
                f'--eval_interval={EVAL_INTERVAL}',
                f'--snapshot={SNAPSHOT_INTERVAL}',
                f'--display_interval={DISPLAY_INTERVAL}',
                f'--gpus={gpus}',
            ],
        },
    }
]
# -
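# For reference, the `gpus` argument built above is a JSON-encoded nested list of GPU device indices, with spaces stripped so it can be passed safely as a command line argument. A minimal standalone sketch of the same expression (`ACCELERATOR_NUM = 4` is a stand-in for the notebook's setting):

```python
import json

# Stand-in for the notebook's ACCELERATOR_NUM setting
ACCELERATOR_NUM = 4

# One inner list of device indices; spaces stripped for CLI-safe passing
gpus = json.dumps([list(range(ACCELERATOR_NUM))]).replace(' ', '')
print(gpus)  # → [[0,1,2,3]]
```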
# ### Submit and monitor the job
#
# When submitting a training job using the `aiplatform.CustomJob` API you can configure the `job.run` function to block until the job completes or to return control to the notebook immediately after the job is submitted. You control this with the `sync` argument.
# +
job_name = 'HUGECTR_{}'.format(time.strftime("%Y%m%d_%H%M%S"))
base_output_dir = os.path.join(WORKSPACE, job_name)
job = vertex_ai.CustomJob(
display_name=job_name,
worker_pool_specs=worker_pool_specs,
base_output_dir=base_output_dir
)
job.run(
sync=True,
service_account=VERTEX_SA,
restart_job_on_worker_restart=False
)
# -
# ## Submit and monitor a Vertex hyperparameter tuning job
#
# In this section of the notebook, you configure, submit and monitor a [Vertex AI Training hyperparameter tuning](https://cloud.google.com/vertex-ai/docs/training/hyperparameter-tuning-overview) job. We will demonstrate how to use Vertex Training hyperparameter tuning to find optimal values for the base learning rate and the dropout ratio. This example can be easily extended to other parameters - e.g. the batch size or even the optimizer type.
#
# As noted in the overview, the training module has been instrumented to integrate with Vertex AI Training hypertuning. After the final evaluation is completed, the AUC value calculated on the `EVAL_BATCHES_FINAL` number of batches from the validation dataset is reported to Vertex AI Training using the `report_hyperparameter_tuning_metric` function. When the training module is executed in the context of a Vertex Custom Job this code path has no effect. When used with a Vertex AI Training hyperparameter tuning job, the job is configured to use the AUC as a metric to optimize.
#
#
#
# ### Configure a hyperparameter job
#
# To prepare a Vertex Training hyperparameter tuning job you need to configure a worker pool specification and a hyperparameter study configuration. Configuring a worker pool is virtually the same as with a Custom Job. The only difference is that you don't explicitly pass values of the hyperparameters being tuned to the training container. They will be provided by the hyperparameter tuning service. In our case, we don't set `lr` and `dropout_rate`.
#
# To configure a hyperparameter study you need to define a metric to optimize, an optimization goal, and a set of configurations for hyperparameters to tune.
#
# In our case the metric is AUC, the optimization goal is to maximize AUC, and the hyperparameters are `lr` and `dropout_rate`. Notice that you have to match the name of the metric with the name used to report the metric in the training module. You also have to match the names of the hyperparameters with the respective names of the command line parameters in your training container.
#
# For each hyperparameter you specify a strategy to apply for sampling values from the hyperparameter's domain. For the `lr` hyperparameter we configure the tuning service to sample values from a continuous range between 0.001 and 0.01 using a logarithmic scale. For the `dropout_rate` we provide a list of discrete values to choose from.
#
# For more information about configuring a hyperparameter study refer to [Vertex AI Hyperparameter job configuration](https://cloud.google.com/vertex-ai/docs/training/using-hyperparameter-tuning).
# +
metric_spec = {'AUC': 'maximize'}
parameter_spec = {
'lr': hpt.DoubleParameterSpec(min=0.001, max=0.01, scale='log'),
'dropout_rate': hpt.DiscreteParameterSpec(values=[0.4, 0.5, 0.6], scale=None),
}
# -
# ### Submit and monitor the job
#
# We can now submit a hyperparameter tuning job. When submitting the job you specify a maximum number of trials to attempt and how many trials to run in parallel.
# +
job_name = 'HUGECTR_HTUNING_{}'.format(time.strftime("%Y%m%d_%H%M%S"))
base_output_dir = os.path.join(WORKSPACE, "model_training", job_name)
custom_job = vertex_ai.CustomJob(
display_name=job_name,
worker_pool_specs=worker_pool_specs,
base_output_dir=base_output_dir
)
hp_job = vertex_ai.HyperparameterTuningJob(
display_name=job_name,
custom_job=custom_job,
metric_spec=metric_spec,
parameter_spec=parameter_spec,
max_trial_count=4,
parallel_trial_count=2,
search_algorithm=None)
hp_job.run(
sync=True,
service_account=VERTEX_SA,
restart_job_on_worker_restart=False
)
# -
# ### Retrieve trial results
#
# After a hyperparameter tuning job completes you can retrieve the trial results from the job object. The results are returned as a list of trial records. To retrieve the trial with the best value of a metric - AUC - you need to scan through all trial records.
hp_job.trials
# #### Find the best trial
# +
best_trial = sorted(hp_job.trials,
key=lambda trial: trial.final_measurement.metrics[0].value,
reverse=True)[0]
print("Best trial ID:", best_trial.id)
print(" AUC:", best_trial.final_measurement.metrics[0].value)
print(" LR:", best_trial.parameters[1].value)
print(" Dropout rate:", best_trial.parameters[0].value)
# -
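# The best-trial scan above is just a descending sort on the final measurement. A minimal sketch with hypothetical mock records (simple stand-ins for the Vertex AI trial objects, not the real API types), mirroring the attribute access used above:

```python
from collections import namedtuple

# Hypothetical stand-ins mirroring the attributes accessed above
Metric = namedtuple('Metric', 'value')
Measurement = namedtuple('Measurement', 'metrics')
Trial = namedtuple('Trial', 'id final_measurement')

trials = [
    Trial('1', Measurement([Metric(0.71)])),
    Trial('2', Measurement([Metric(0.78)])),
    Trial('3', Measurement([Metric(0.74)])),
]

# Same key as the notebook: sort by the first (and only) final metric, best first
best = sorted(trials, key=lambda t: t.final_measurement.metrics[0].value, reverse=True)[0]
print(best.id)  # → 2
```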
# ## Next Steps
# After completing this notebook you can proceed to the [03-inference-triton-huygectr.ipynb](03-inference-triton-huygectr.ipynb) notebook that demonstrates how to deploy the DeepFM model trained in this notebook using NVIDIA Triton Server.
| 02-model-training-hugectr.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.2 64-bit (''handsonml'': conda)'
# name: python38264bithandsonmlconda23b04c2097f7484db9019fc72e06cb0e
# ---
import numpy as np
import tensorflow as tf
import tensorflow.keras as keras
def generate_time_series(batch_size, n_steps):
freq1, freq2, offsets1, offsets2 = np.random.rand(4, batch_size, 1)
time = np.linspace(0, 1, n_steps)
series = 0.5 * np.sin((time - offsets1) * (freq1 * 10 + 10)) # wave 1
series += 0.2 * np.sin((time - offsets2) * (freq2 * 20 + 20)) # wave 2
series += 0.1 * (np.random.rand(batch_size, n_steps) - 0.5) # noise
return series[..., np.newaxis].astype(np.float32)
n_steps = 50
series = generate_time_series(10000, n_steps + 1)
X_train, y_train = series[:7000, :n_steps], series[:7000, -1]
X_valid, y_valid = series[7000:9000, :n_steps], series[7000:9000, -1]
X_test, y_test = series[9000:, :n_steps], series[9000:, -1]
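# A quick shape check of the generator and the splits above: each sample is `n_steps + 1` univariate points, with the trailing channel dimension Keras expects. The generator is reproduced here so the sketch is self-contained:

```python
import numpy as np

# Same generator as above, repeated so this sketch runs on its own
def generate_time_series(batch_size, n_steps):
    freq1, freq2, offsets1, offsets2 = np.random.rand(4, batch_size, 1)
    time = np.linspace(0, 1, n_steps)
    series = 0.5 * np.sin((time - offsets1) * (freq1 * 10 + 10))   # wave 1
    series += 0.2 * np.sin((time - offsets2) * (freq2 * 20 + 20))  # wave 2
    series += 0.1 * (np.random.rand(batch_size, n_steps) - 0.5)    # noise
    return series[..., np.newaxis].astype(np.float32)

s = generate_time_series(10000, 50 + 1)
print(s.shape)  # → (10000, 51, 1)
```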
# + tags=[]
y_pred = X_valid[:, -1]
mse_baseline = np.mean(keras.losses.mean_squared_error(y_valid, y_pred))
print(mse_baseline)
# + tags=[]
model_linear = keras.models.Sequential([
keras.layers.Flatten(input_shape=[50, 1]),
keras.layers.Dense(1)
])
model_linear.compile(optimizer='adam', loss='mean_squared_error', metrics=['mean_squared_error'])
model_linear.summary()
# + tags=[]
model_linear.fit(x=X_train, y=y_train, validation_data=(X_valid, y_valid), epochs=20, verbose=1)
# + tags=[]
model_RNN = keras.models.Sequential([
keras.layers.SimpleRNN(20, return_sequences=True, input_shape=[None, 1]),
keras.layers.SimpleRNN(20, return_sequences=True),
keras.layers.SimpleRNN(1)
])
model_RNN.compile(optimizer='adam', loss='mean_squared_error', metrics=['mean_squared_error'])
model_RNN.summary()
# + tags=[]
model_RNN.fit(x=X_train, y=y_train, validation_data=(X_valid, y_valid), epochs=1, verbose=1)
# + tags=[]
series = generate_time_series(1, n_steps + 10)
X_new, Y_new = series[:, :n_steps], series[:, n_steps:]
X = X_new
for step_ahead in range(10):
y_pred_one = model_RNN.predict(X[:, step_ahead:])[:, np.newaxis, :]
X = np.concatenate([X, y_pred_one], axis=1)
Y_pred = X[:, n_steps:]
print(Y_pred)
# -
series = generate_time_series(10000, n_steps + 10)
X_train, y_train = series[:7000, :n_steps], series[:7000, -10:, 0]
X_valid, y_valid = series[7000:9000, :n_steps], series[7000:9000, -10:, 0]
X_test, y_test = series[9000:, :n_steps], series[9000:, -10:, 0]
# + tags=[]
model_RNN = keras.models.Sequential([
keras.layers.SimpleRNN(20, return_sequences=True, input_shape=[None, 1]),
keras.layers.SimpleRNN(20),
keras.layers.Dense(10)
])
model_RNN.compile(optimizer='adam', loss='mean_squared_error', metrics=['mean_squared_error'])
model_RNN.summary()
model_RNN.fit(x=X_train, y=y_train, validation_data=(X_valid, y_valid), epochs=10, verbose=1)
# + tags=[]
model_RNN.fit(x=X_train, y=y_train, validation_data=(X_valid, y_valid), epochs=10, verbose=1)
# + tags=[]
Y_pred = model_RNN.predict(X_new)
print(Y_pred)
# -
| chapter15_sequence.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
xrefs_file="../ontology/xref-all.tsv"
obsoletes_file="../ontology/obsoletes_ordo.tsv"
diff_assertions_file="../ontology/reports/report-all-properties.tsv"
df_xrefs=pd.read_csv(xrefs_file,sep="\t")
df_ordo=pd.read_csv(obsoletes_file,sep="\t")
df_ordo['deprecated']=True
df_ordo.head()
df_xrefs[~df_xrefs['?deprecated'].isna()].head()
df_xrefs_ordo=df_xrefs[(df_xrefs['?prefix']=="Orphanet")]
df_xrefs_ordo = df_xrefs_ordo.merge(df_ordo[['?curie','deprecated']], left_on=['?xref'], right_on=['?curie'])
print(len(df_xrefs_ordo))
df_xrefs_ordo.head()
df_xrefs_ordo.to_csv("../ontology/ordo_deprecated.tsv", sep="\t")
# # SSSOM Files
# +
import pandas as pd
from pathlib import Path
from argparse import ArgumentParser
curiemap = {
'OMIM': 'http://omim.org/entry/',
'OMIMPS': 'http://www.omim.org/phenotypicSeries/',
'ORPHA': 'http://www.orpha.net/ORDO/Orphanet_',
'MONDO': 'http://purl.obolibrary.org/obo/MONDO_'
}
def get_dataframe(tsv):
try:
df = pd.read_csv(tsv,sep="\t", comment="#")
df["source"]=Path(tsv).stem
return df
except pd.errors.EmptyDataError:
print("WARNING! ", tsv, " is empty and has been skipped.")
sssom_tsv_path = "/Users/matentzn/ws/mondo/src/ontology/mappings/ordo-omim.sssom.tsv"
df = get_dataframe(sssom_tsv_path)
sssom_files = {}
for index, row in df.iterrows():
object_id = row["object_id"]
prefix = object_id.split(":")[0]
if prefix not in sssom_files:
sssom_files[prefix] = []
sssom_files[prefix].append(row)
for prefix in sssom_files:
    print(prefix)
print(df.head())
# -
ps = "https://omim.org/phenotypicSeriesTitles/all?format=tsv"
mim2gene = "https://omim.org/static/omim/data/mim2gene.txt"
mimTitles = "https://data.omim.org/downloads/Wi95pIjsRfqt4ioi9rEJNQ/mimTitles.txt"
genemap2 = "https://data.omim.org/downloads/Wi95pIjsRfqt4ioi9rEJNQ/genemap2.txt"
morbidmap = "https://data.omim.org/downloads/Wi95pIjsRfqt4ioi9rEJNQ/morbidmap.txt"
import pandas as pd
df_ps = pd.read_csv(ps,sep="\t")
df_ps.head()
import pandas as pd
diff_current_file="../ontology/reports/report-2020-06-30-release.tsv"
diff_compare_file="../ontology/reports/report-2019-06-29-release.tsv"
raw=pd.read_csv(diff_current_file,sep="\t")
raw=raw.fillna("None")
raw2=pd.read_csv(diff_compare_file,sep="\t")
raw2=raw2.fillna("None")
replace=dict()
replace['<http://purl.obolibrary.org/obo/IAO_']="IAO:"
replace['<http://purl.obolibrary.org/obo/mondo#']="mondo:"
replace['<http://purl.org/dc/elements/1.1/']="dce:"
replace['<http://purl.org/dc/terms/']="dc:"
replace['<http://www.geneontology.org/formats/oboInOwl#']="oio:"
replace['<http://www.w3.org/1999/02/22-rdf-syntax-ns#']="rdf:"
replace['<http://www.w3.org/2000/01/rdf-schema#']="rdfs:"
replace['<http://www.w3.org/2002/07/owl#']="owl:"
replace['<http://www.w3.org/2004/02/skos/core#']="skos:"
replace['>']=""
raw2
df2=raw2[['?term','?property','?value','?p','?v']].drop_duplicates()
df2['?property'] = df2['?property'].replace(replace, regex=True)
df2['?p'] = df2['?p'].replace(replace, regex=True)
df1=raw[['?term','?property','?value','?p','?v']].drop_duplicates()
df1['?property'] = df1['?property'].replace(replace, regex=True)
df1['?p'] = df1['?p'].replace(replace, regex=True)
df=pd.concat([df1, df2, df2]).drop_duplicates(keep=False)
df.to_csv("check.tsv",sep="\t")
df[['?term','?property']].drop_duplicates().groupby(['?property']).agg(['count'])
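# The `pd.concat([df1, df2, df2]).drop_duplicates(keep=False)` pattern above is a one-sided diff: a row of `df1` survives only if it is absent from `df2` (shared rows occur at least three times and are dropped; rows unique to `df2` occur exactly twice and are also dropped). A small self-contained sketch:

```python
import pandas as pd

a = pd.DataFrame({'x': [1, 2, 3]})
b = pd.DataFrame({'x': [2, 3, 4]})

# Rows in a but not in b: duplicating b guarantees none of its rows survive keep=False
only_in_a = pd.concat([a, b, b]).drop_duplicates(keep=False)
print(only_in_a['x'].tolist())  # → [1]
```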
df_d=pd.concat([raw, raw2, raw2]).drop_duplicates(keep=False)
df_d
df_xref=df_d[((df_d['?property']=="<http://www.geneontology.org/formats/oboInOwl#hasDbXref>") & (df_d['?p']!="None") & (df_d['?v']=="MONDO:equivalentTo"))].copy()
df_xref['source'] = df_xref['?value'].str.split(r":", expand=True)[0]
df_xref=df_xref[['?term','source']].drop_duplicates()
df_xref
df_xref.groupby(['source']).agg(['count'])
| src/scripts/mondo_scratch_notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# level order traversal
def lot(root, graph):
lst = []
lst.append(root)
while(lst):
print(lst)
new_lst = []
for items in lst:
new_lst.extend(graph[items])
lst = new_lst
total_handshake = 0
def dfs(root, handshake, graph):
global total_handshake
handshake += 1
for items in graph[root]:
dfs(items, handshake, graph)
print(total_handshake, handshake)
total_handshake += handshake
from collections import defaultdict
t = int(input())
for _ in range(t):
graph = defaultdict(list)
root = 0
n = int(input())
lst = list(map(int, input().split()))
for index, value in enumerate(lst):
comm = index+1
if value != 0:
graph[value].append(comm)
else:
root = comm
# print(graph)
hand = 0
fist = 0
for key, value in graph.items():
hand += 1
print(key, value)
lot(root, graph)
dfs(root, -1, graph)
print(total_handshake)
# 3
# 0 1 1
# 2
# 2 0
# 9
# 0 1 2 2 2 5 5 5 5
# 100
# 46 65 78 95 100 24 83 17 7 22 67 74 31 4 36 22 48 96 5 45 75 43 41 89 63 6 51 12 48 56 53 74 58 95 76 63 9 23 20 79 68 72 77 81 83 78 71 46 35 12 25 65 24 66 27 16 31 0 23 39 50 33 3 99 41 61 84 34 28 62 21 27 45 10 13 65 37 58 68 9 52 73 63 19 19 38 42 65 42 70 39 94 34 79 58 59 37 2 73 78
# +
# solution
from collections import deque
# bfs
def calculate(tree):
queue = deque(tree[0])
handshakes = fistbumps = nodes = level = 0
while(queue):
for _ in range(len(queue)):
root = queue.popleft()
handshakes += level
fistbumps += nodes - level
nodes += 1
queue.extend(tree[root])
level += 1
return handshakes, fistbumps
t = int(input())
for _ in range(t):
n = int(input())
superiors = list(map(int, input().split()))
tree = {i : [] for i in range(n+1)}
[tree[s].append(i + 1) for i, s in enumerate(superiors)]
handshakes, bumps = calculate(tree)
print(handshakes, bumps)
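# The BFS above accumulates, for each visited node, its depth (`level`: one handshake per superior on the path to the root) and the number of earlier-visited non-ancestors (`nodes - level`: one fist bump each). A self-contained run on the first sample case (`n = 3`, superiors `0 1 1`), without reading from stdin:

```python
from collections import deque

def calculate(tree):
    # tree[0] holds the root; traverse level by level
    queue = deque(tree[0])
    handshakes = fistbumps = nodes = level = 0
    while queue:
        for _ in range(len(queue)):
            root = queue.popleft()
            handshakes += level           # one handshake per ancestor
            fistbumps += nodes - level    # earlier-visited non-ancestors
            nodes += 1
            queue.extend(tree[root])
        level += 1
    return handshakes, fistbumps

sample_superiors = [0, 1, 1]   # employee i+1 reports to sample_superiors[i]; 0 marks the root
sample_n = len(sample_superiors)
sample_tree = {i: [] for i in range(sample_n + 1)}
for i, s in enumerate(sample_superiors):
    sample_tree[s].append(i + 1)
print(calculate(sample_tree))  # → (2, 1)
```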
| HackerEarth/Comrades - II.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from pptx import Presentation
from pptx.util import Inches
prs = Presentation()
# title_slide_layout = prs.slide_layouts[0]
# slide = prs.slides.add_slide(title_slide_layout)
# title = slide.shapes.title
# subtitle = slide.placeholders[1]
# title.text = "COVID-19 Heat Map"
# subtitle.text = "Exposure Level by Site"
blank_slide_layout = prs.slide_layouts[6]
slide = prs.slides.add_slide(blank_slide_layout)
report = slide.shapes
img_path = 'Legend.png'
left = Inches(1.8)
top = Inches(6.4)
height = Inches(0.8)
pic = slide.shapes.add_picture(img_path, left, top, height=height)
img_path = 'southcamp_risk.png'
left = Inches(0.5)
top = Inches(0.2)
height = Inches(3)
pic = slide.shapes.add_picture(img_path, left, top, height=height)
img_path = 'northcamp_risk.png'
left = Inches(5)
top = Inches(0.2)
height = Inches(2.5)
pic = slide.shapes.add_picture(img_path, left, top, height=height)
img_path = 'ridge_risk.png'
left = Inches(0.2)
top = Inches(3.2)
height = Inches(2.5)
pic = slide.shapes.add_picture(img_path, left, top, height=height)
img_path = 'lifecenter_risk.png'
left = Inches(3.9)
top = Inches(3.5)
height = Inches(2.5)
pic = slide.shapes.add_picture(img_path, left, top, height=height)
img_path = 'bbhville_risk.png'
left = Inches(6.4)
top = Inches(2.9)
height = Inches(1.5)
pic = slide.shapes.add_picture(img_path, left, top, height=height)
img_path = 'therest_risk.png'
left = Inches(6.3)
top = Inches(4.5)
height = Inches(1.5)
pic = slide.shapes.add_picture(img_path, left, top, height=height)
# title_only_slide_layout = prs.slide_layouts[5]
# slide = prs.slides.add_slide(title_only_slide_layout)
# shapes = slide.shapes
# shapes.title.text = 'Adding a Table'
# rows = cols = 2
# left = top = Inches(2.0)
# width = Inches(6.0)
# height = Inches(0.8)
# table = shapes.add_table(rows, cols, left, top, width, height).table
# # set column widths
# table.columns[0].width = Inches(2.0)
# table.columns[1].width = Inches(4.0)
# # write column headings
# table.cell(0, 0).text = 'Foo'
# table.cell(0, 1).text = 'Bar'
# # write body cells
# table.cell(1, 0).text = 'Baz'
# table.cell(1, 1).text = 'Qux'
prs.save('CV19 AUTO HEATMAP (SAMPLE).pptx')
# -
| src/ReportGenPPT.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <div style="float: right; margin: 20px 20px 20px 20px"><img src="images/bro.png" width="130px"></div>
#
# # Zeek to Kafka
# This notebook covers how to stream Zeek data using Kafka as a message queue. The setup takes a bit of work but the result will be a robust way to stream data from Zeek.
#
# <div style="float: right; margin: 20px 20px 20px 20px"><img src="images/kafka.png" width="180px"></div>
#
# ### Software
# - Zeek Network Monitor: https://www.zeek.org
# - Kafka Zeek Plugin: https://github.com/apache/metron-bro-plugin-kafka
# - Kafka: https://kafka.apache.org
# # Part 1: Streaming data pipeline
# To set some context, our long term plan is to build out a streaming data pipeline. This notebook will help you get started on this path. After completing this notebook you can look at the next steps by viewing our notebooks that use Spark on Zeek output.
# - [Zeek to Spark](https://nbviewer.jupyter.org/github/SuperCowPowers/zat/blob/master/notebooks/Zeek_to_Spark.ipynb)
# - [Zeek to Kafka to Spark](https://nbviewer.jupyter.org/github/SuperCowPowers/zat/blob/master/notebooks/Zeek_to_Kafka_to_Spark.ipynb)
#
# So our streaming pipeline looks conceptually like this.
# <div style="margin: 20px 20px 20px 20px"><img src="images/pipeline.png" width="750px"></div>
#
# - **Kafka Plugin for Zeek**
# - **Publish (provides a nice decoupled architecture)**
# - **Pull/Subscribe to whatever feed you want (http, dns, conn, x509...)**
# - ETL (Extract Transform Load) on the raw message data (parsed data with types)
# - Perform Filtering/Aggregation
# - Data Analysis and Machine Learning
#
# <div style="float: right; margin: 20px 0px 0px 0px"><img src="images/confused.jpg" width="300px"></div>
#
# # Getting Everything Setup
# Things you'll need:
# - A running Zeek network security monitor: https://docs.zeek.org/en/stable/install/install.html
# - The Kafka Plugin for Zeek: https://github.com/apache/metron-bro-plugin-kafka
# - A Kafka Broker: https://kafka.apache.org
#
# The weblinks above do a pretty good job of getting you set up with Zeek, Kafka, and the Kafka plugin. If you already have these things set up then you're good to go. If not, take some time and get them up and running. If you're a bit wacky (like me) and want to set these things up on a Mac you might check out my notes here [Zeek/Kafka Mac Setup](https://github.com/SuperCowPowers/zat/blob/master/docs/zeek_kafka_mac.md)
#
# ## Systems Check
# Okay now that Zeek with the Kafka plugin is set up, let's do just a bit of testing to make sure it's all AOK before we get into making a Kafka consumer in Python.
#
# **Test the Zeek Kafka Plugin**
#
# Make sure the Kafka plugin is ready to go by running the following command on your Zeek instance:
#
# ```
# $ zeek -N Apache::Kafka
# Apache::Kafka - Writes logs to Kafka (dynamic, version 0.3.0)
# ```
#
# **Activate the Kafka Plugin**
#
# There's a good explanation of all the options here (<https://github.com/apache/metron-bro-plugin-kafka>). In my case
# I needed to put a different load command when 'activating' the Kafka plugin in my local.zeek configuration file. Here's what I added to the 'standard' site/local.zeek file.
#
# ```
# @load Apache/Kafka
# redef Kafka::topic_name = "";
# redef Kafka::send_all_active_logs = T;
# redef Kafka::kafka_conf = table(
# ["metadata.broker.list"] = "localhost:9092"
# );
# ```
# - The first line took me a while to figure out
# - The rest is, at least for me, the best setup:
#
# By putting in a blank topic name, all output topics are labeled with the name of their log file. For instance, stuff that goes to dns.log is mapped to the 'dns' Kafka topic, http.log to the 'http' topic, and so on. This was exactly what I wanted.
#
#
# ## Start Kafka
# - Linux: <https://kafka.apache.org/quickstart#quickstart_startserver>
# - Mac: If you installed with Brew it's running as a service
#
# ## Run Zeek
# ```
# $ zeek -i en0 <path to>/local.zeek
# or
# $ zeekctl deploy
# ```
#
# ## Verify messages are in the queue
# ```
# $ kafka-console-consumer --bootstrap-server localhost:9092 --topic dns
# ```
# **After a second or two.. you should start seeing DNS requests/replies coming out.. hit Ctrl-C after you see some.**
# ```
# {"ts":1503513688.232274,"uid":"CdA64S2Z6Xh555","id.orig_h":"192.168.1.7","id.orig_p":58528,"id.resp_h":"192.168.1.1","id.resp_p":53,"proto":"udp","trans_id":43933,"rtt":0.02226,"query":"brian.wylie.is.awesome.tk","qclass":1,"qclass_name":"C_INTERNET","qtype":1,"qtype_name":"A","rcode":0,"rcode_name":"NOERROR","AA":false,"TC":false,"RD":true,"RA":true,"Z":0,"answers":["172.16.17.32","192.168.3.11","172.16.31.10","172.16.31.10","172.16.31.10","192.168.127.12","172.16.31.10","192.168.3.11"],"TTLs":[25.0,25.0,25.0,25.0,25.0,25.0,25.0,25.0],"rejected":false}
# ```
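# Each Kafka message value above is just a JSON-encoded Zeek log record, so decoding one needs nothing beyond the standard library. A minimal sketch using a shortened, illustrative record (fields copied from the example output above; real records carry many more fields):

```python
import json

# Shortened, illustrative DNS record like the one printed above
raw = ('{"ts":1503513688.232274,"query":"brian.wylie.is.awesome.tk",'
       '"qtype_name":"A","rcode_name":"NOERROR"}')
record = json.loads(raw)
print(record['query'], record['rcode_name'])  # → brian.wylie.is.awesome.tk NOERROR
```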
# # If you made it this far you are done!
# <div style="float: left; margin: 20px 20px 20px 20px"><img src="images/whew.jpg" width="300px"></div>
#
# +
# Okay, now that the setup is done, let's put together a bit of code to
# process the Kafka 'topics' that are now being streamed from our Zeek instance
# First we create a Kafka Consumer
import json
from kafka import KafkaConsumer
consumer = KafkaConsumer('dns', bootstrap_servers=['localhost:9092'],
value_deserializer=lambda x: json.loads(x.decode('utf-8')))
# +
# Now let's process our Kafka messages
for message in consumer:
print(message.value)
# Note: This will just loop forever, but here's an
# example of the types of output you'll see
# {'ts': 1570120289.692109, 'uid': 'CAdnHRVdI94Upoej7', 'id.orig_h': '192.168.1.7', '...
# {'ts': 1570120295.655344, 'uid': 'Ctcv6F2bLT8fB9GOUb', 'id.orig_h': '192.168.1.5', ...
# {'ts': 1570120295.663177, 'uid': 'CLrohRNbVWuBecKud', 'id.orig_h': '192.168.1.2', '...
# {'ts': 1570120295.765735, 'uid': 'CxhnkA3sMdZcQJ6vf7', 'id.orig_h': '192.168.1.7', '...
# {'ts': 1570120295.765745, 'uid': 'CEPF9E4a9WeM1cFlSk', 'id.orig_h': 'fe80::4b8:c380:5a7...
# -
# <div style="float: right; margin: 20px 20px 20px 20px"><img src="images/dynamic.jpg" width="300px"></div>
#
# ## Now What?
# Okay so now we can actually do something useful with our new streaming data, in this case we're going to use some results from our 'Risky Domains' Notebook that computed a risky set of TLDs.
# - [Risky Domain Stats](https://nbviewer.jupyter.org/github/SuperCowPowers/zat/blob/master/notebooks/Risky_Domains.ipynb)
# +
from pprint import pprint
import tldextract
from zat.utils import vt_query
# Create a VirusTotal Query Class
vtq = vt_query.VTQuery()
risky_tlds = set(['info', 'tk', 'xyz', 'online', 'club', 'ru', 'website',
'in', 'ws', 'top', 'site', 'work', 'biz', 'name', 'tech'])
# -
# Now let's process our Kafka 'dns' messages
for message in consumer:
dns_message = message.value
# Pull out the TLD
query = dns_message['query']
tld = tldextract.extract(query).suffix
# Check if the TLD is in the risky group
if tld in risky_tlds:
# Make the query with the full query
results = vtq.query_url(query)
if results.get('positives'):
print('\nOMG the Network is on Fire!!!')
pprint(results)
# # Part 1: Streaming data pipeline
# Recall that our long term plan is to build out a streaming data pipeline. This notebook has helped you get started on this path. After completing this notebook you can look at the next steps by viewing our notebooks that use Spark on Zeek output.
# - [Zeek to Spark](https://nbviewer.jupyter.org/github/SuperCowPowers/zat/blob/master/notebooks/Zeek_to_Spark.ipynb)
# - [Zeek to Kafka to Spark](https://nbviewer.jupyter.org/github/SuperCowPowers/zat/blob/master/notebooks/Zeek_to_Kafka_to_Spark.ipynb)
#
#
# <div style="margin: 20px 20px 20px 20px"><img src="images/pipeline.png" width="750px"></div>
#
#
# <img align="right" style="padding:20px" src="images/SCP_med.png" width="150">
#
# ## Wrap Up
# Well that's it for this notebook. We set up Zeek with the Kafka plugin and showed a simple example of how we might process the streaming data coming from Kafka.
#
# ### Software
# - Zeek Network Monitor: https://www.zeek.org
# - Kafka Zeek Plugin: https://github.com/apache/metron-bro-plugin-kafka
# - Kafka: https://kafka.apache.org
#
# ## About SuperCowPowers
# The company was formed so that its developers could follow their passion for Python, streaming data pipelines and having fun with data analysis. We also think cows are cool and should be superheroes or at least carry around rayguns and burner phones. <a href="https://www.supercowpowers.com" target="_blank">Visit SuperCowPowers</a>
| notebooks/Zeek_to_Kafka.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Registration Framework Components
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Learning Objectives
#
# * Get exposure to the different components in a registration optimization framework and how they are connected
# * Set up and run a complete registration pipeline
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Registration Optimization Framework Overview
#
# 
# + [markdown] slideshow={"slide_type": "slide"}
# ## Exercise: Components of the registration framework
#
# Study the image registration pipeline below, and replace the `fixMe` components with their correct values.
#
# + slideshow={"slide_type": "subslide"}
import numpy as np
import itk
from itkwidgets import view
from matplotlib import pyplot as plt
# %matplotlib inline
from ipywidgets import interact
# + slideshow={"slide_type": "subslide"}
# Load and examine our input images
PixelType = itk.ctype('float')
fixed_image = itk.imread('data/BrainProtonDensitySliceBorder20.png', PixelType)
# itk 5.1b1
# plt.imshow(np.asarray(fixed_image))
plt.imshow(itk.array_from_image(fixed_image))
# + slideshow={"slide_type": "subslide"}
moving_image = itk.imread('data/BrainProtonDensitySliceShifted13x17y.png', PixelType)
plt.imshow(itk.array_from_image(moving_image))
# + slideshow={"slide_type": "subslide"}
# Define our registration components
Dimension = fixed_image.GetImageDimension()
FixedImageType = type(fixed_image)
MovingImageType = type(moving_image)
# itk.D is the 'double' C type
TransformType = itk.TranslationTransform[itk.D, Dimension]
initial_transform = TransformType.New()
optimizer = itk.RegularStepGradientDescentOptimizerv4.New(
learning_rate=4,
minimum_step_length=0.001,
relaxation_factor=0.5,
number_of_iterations=200)
metric = itk.MeanSquaresImageToImageMetricv4[
FixedImageType, MovingImageType].New()
# + slideshow={"slide_type": "subslide"}
# Set up our registration method with its components
registration = itk.ImageRegistrationMethodv4.New(fixed_image=fixMe,
moving_image=fixMe,
metric=fixMe,
optimizer=fixMe,
initial_transform=fixMe)
# + slideshow={"slide_type": "fragment"}
# # %load solutions/6_Registration_Framework_answer1.py
# + slideshow={"slide_type": "subslide"}
# Set the initial parameters for the optimization problem
moving_initial_transform = TransformType.New()
initial_parameters = moving_initial_transform.GetParameters()
# X translation
initial_parameters[0] = 0.0
# Y translation
initial_parameters[1] = 0.0
moving_initial_transform.SetParameters(initial_parameters)
registration.SetMovingInitialTransform(moving_initial_transform)
identity_transform = TransformType.New()
identity_transform.SetIdentity()
registration.SetFixedInitialTransform(identity_transform)
# + slideshow={"slide_type": "subslide"}
# Set up multi-resolution registration parameters
# In multi-resolution registration, registration is first performed
# on an image with reduced content. Then the resulting spatial transformation
# is used at the start of optimization at the next level.
# This improves robustness and speed
registration.SetNumberOfLevels(1)
registration.SetSmoothingSigmasPerLevel([0])
registration.SetShrinkFactorsPerLevel([1])
# + slideshow={"slide_type": "subslide"}
# Run the registration!
registration.Update()
# + slideshow={"slide_type": "subslide"}
# Examine the result
transform = registration.GetTransform()
final_parameters = transform.GetParameters()
x_translation = final_parameters[0]
y_translation = final_parameters[1]
number_of_iterations = optimizer.GetCurrentIteration()
best_value = optimizer.GetValue()
print("Result:")
print(" Translation X = " + str(x_translation))
print(" Translation Y = " + str(y_translation))
print(" Iterations = " + str(number_of_iterations))
print(" Metric value = " + str(best_value))
# + slideshow={"slide_type": "subslide"}
# Our resulting transform is a composition, or chaining,
# of the initial transform and the optimized transform
output_transform = itk.CompositeTransform[itk.D, Dimension].New()
output_transform.AddTransform(moving_initial_transform)
output_transform.AddTransform(registration.GetModifiableTransform())
# + slideshow={"slide_type": "subslide"}
resampled_moving_image = itk.resample_image_filter(fixMe,
transform=fixMe,
use_reference_image=True,
default_pixel_value=1,
reference_image=fixMe)
# + slideshow={"slide_type": "fragment"}
# # %load solutions/6_Registration_Framework_answer2.py
# + slideshow={"slide_type": "subslide"}
view(resampled_moving_image, ui_collapsed=True)
# + slideshow={"slide_type": "subslide"}
difference = itk.subtract_image_filter(fixed_image, resampled_moving_image)
view(difference, ui_collapsed=True)
# + slideshow={"slide_type": "subslide"}
original_difference = itk.subtract_image_filter(fixed_image, moving_image)
view(original_difference, ui_collapsed=True)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Volumetric Image Registration that Just Works in Python
#
# [ITK_Example01_SimpleRegistration.ipynb](https://github.com/InsightSoftwareConsortium/ITKElastix/blob/master/examples/ITK_Example01_SimpleRegistration.ipynb)
# [also as a binder](https://mybinder.org/v2/gh/InsightSoftwareConsortium/ITKElastix/master?urlpath=lab/tree/examples%2FITK_Example01_SimpleRegistration.ipynb)
#
# Featuring:
#
# - Easily install, cross-platform Python packages
# - Proven registration method and settings that **just work** for most use cases:
#
# * Multi-resolution
# * Fast mutual-information similarity metric
# * Rigid -> affine -> deformable b-spline transformations
# * Fast adaptive stochastic gradient descent optimization, automatic parameter estimation
# * Also generates resampled moving image by default
# * Intelligent sampling
# * Adjustable to new problems
#
# Install:
#
# ```
# pip install itk-elastix
# ```
#
# Use:
#
# ```
# import itk
#
# # The fixed and moving image can be an itk.Image or a numpy.ndarray
# fixed_image = itk.imread('path/to/fixed_image.mha')
# moving_image = itk.imread('path/to/moving_image.mha')
#
# result_image, result_transform_parameters = itk.elastix_registration_method(fixed_image, moving_image)
# ```
# -
# ### Enjoy ITK!
| 6_Registration_Framework_Components.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/thiago1590/Deep_Learning/blob/master/iris_cruzada.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="ImqLJAwmvs_Q" colab_type="code" outputId="9498c69e-d1e6-40c9-b74b-b21179180d77" colab={"base_uri": "https://localhost:8080/", "height": 33}
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import np_utils
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import cross_val_score
# + id="EDxt096IwQbI" colab_type="code" colab={}
base = pd.read_csv('/content/drive/My Drive/Deep Learning/Iris/original.csv')
previsores = base.iloc[:, 0:4].values
classe = base.iloc[:, 4].values
# + id="xmx7uDD9wV3t" colab_type="code" colab={}
from sklearn.preprocessing import LabelEncoder
labelencoder = LabelEncoder()
classe = labelencoder.fit_transform(classe)
classe_dummy = np_utils.to_categorical(classe)
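# `np_utils.to_categorical` one-hot encodes the integer class labels. A minimal
# numpy equivalent (a sketch of the idea, not Keras' actual implementation):

```python
import numpy as np

def to_one_hot(labels, num_classes=None):
    """One-hot encode integer class labels, e.g. [0, 2, 1] -> a 3x3 matrix."""
    labels = np.asarray(labels)
    if num_classes is None:
        num_classes = int(labels.max()) + 1
    # Indexing the identity matrix by the labels selects the matching rows.
    return np.eye(num_classes)[labels]

print(to_one_hot([0, 2, 1]))
```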
# + id="kVKmpUGgxxKA" colab_type="code" colab={}
def criar_rede():
classificador = Sequential()
classificador.add(Dense(units = 4, activation = 'relu', input_dim = 4))
classificador.add(Dense(units = 4, activation = 'relu'))
classificador.add(Dense(units = 3, activation = 'softmax'))
classificador.compile(optimizer = 'adam', loss = 'categorical_crossentropy',
metrics = ['categorical_accuracy'])
return classificador
# + id="GLn-6QYH4Hu1" colab_type="code" outputId="d7e6ef68-1683-4a41-9744-9107438b178b" colab={"base_uri": "https://localhost:8080/", "height": 1000}
classificador = KerasClassifier(build_fn = criar_rede,epochs = 1000, batch_size = 10)
resultados = cross_val_score(estimator=classificador,X = previsores,y = classe, cv = 10, scoring = 'accuracy')
# + id="EbeAzSu_2pYz" colab_type="code" outputId="6b336582-aec5-4322-acf2-d14ee0c09c83" colab={"base_uri": "https://localhost:8080/", "height": 50}
media = resultados.mean()
desvio = resultados.std()
print(media)
desvio
# + id="E1bF0w8n3CrO" colab_type="code" outputId="b36c9e89-edc2-485c-d246-d41e7497e839" colab={"base_uri": "https://localhost:8080/", "height": 50}
resultados
# + id="k1s9x0Gp4JHu" colab_type="code" colab={}
| classificar_iris/iris_cruzada.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Data Preprocessing - EIT - Machine Learning
#
# ### Copyright (c) 2018, Faststream Technologies
# ### Author: <NAME>
import numpy as np
import pandas as pd
df = pd.read_excel('../assets/EIT Clean.xlsx', skiprows=[0, 1, 2])
df.head()
# +
to_be_removed = ['res_min_7_8', 'res_max_7_8', 'res_min_8_1',
'res_max_8_1', 'part', 'distance', 'no_electrodes', 'name', 'metric']
for i in range(1, 9):
    df[str(i)] = (df['res_min_' + str(i)] + df['res_max_' + str(i)]) / 2  # midpoint of min and max
to_be_removed.append('res_min_' + str(i))
to_be_removed.append('res_max_' + str(i))
# -
for i, j in zip(range(1, 9), range(2, 8)):
    df[str(i) + '_' + str(j)] = (df['res_min_' + str(i) + '_' + str(j)] + df['res_max_' + str(i) + '_' + str(j)]) / 2  # midpoint of min and max
to_be_removed.append('res_min_' + str(i) + '_' + str(j))
to_be_removed.append('res_max_' + str(i) + '_' + str(j))
df['7_8'] = (df['res_min_7_8'] + df['res_max_7_8']) / 2
df['8_1'] = (df['res_min_8_1'] + df['res_max_8_1']) / 2
df.drop(to_be_removed, axis=1, inplace=True)
df.head()
df.columns
df.to_csv('../assets/eit_clean_final.csv', index=False)
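# The transformation above reduces each `res_min_*`/`res_max_*` column pair to its
# midpoint and drops the originals. A toy illustration of that pattern (hypothetical
# values, not the real EIT data):

```python
import pandas as pd

# Two measurement columns collapse into one averaged column.
toy = pd.DataFrame({'res_min_1': [10.0, 20.0], 'res_max_1': [14.0, 26.0]})
toy['1'] = (toy['res_min_1'] + toy['res_max_1']) / 2  # midpoint of min and max
toy = toy.drop(['res_min_1', 'res_max_1'], axis=1)
print(toy)
```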
| main/eit_real.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Stacks, Queues & Heaps
# © <NAME>, 2019.
#
# ### Stack using Python List
# Stack is a LIFO data structure -- last-in, first-out.
# Use append() to push an item onto the stack.
# Use pop() to remove an item.
my_stack = list()
my_stack.append(4)
my_stack.append(7)
my_stack.append(12)
my_stack.append(19)
print(my_stack)
print(my_stack.pop())
print(my_stack.pop())
print(my_stack)
# ### Stack using List with a Wrapper Class
# We create a Stack class and a full set of Stack methods.
# But the underlying data structure is really a Python List.
# For pop and peek methods we first check whether the stack is empty, to avoid exceptions.
class Stack():
def __init__(self):
self.stack = list()
def push(self, item):
self.stack.append(item)
def pop(self):
if len(self.stack) > 0:
return self.stack.pop()
else:
return None
def peek(self):
if len(self.stack) > 0:
return self.stack[len(self.stack)-1]
else:
return None
def __str__(self):
return str(self.stack)
# ### Test Code for Stack Wrapper Class
my_stack = Stack()
my_stack.push(1)
my_stack.push(3)
my_stack.push(5)
print(my_stack)
print(my_stack.pop())
print(my_stack.peek())
print(my_stack.pop())
print(my_stack.pop())
# ### Stack using Python collections.deque
# A `collections.deque` supports O(1) appends and pops from either end, making it an efficient stack in Python.
# +
from collections import deque
stack = deque()
stack.append("xyz.com/")
stack.append("xyz.com/one")
stack.append("xyz.com/two")
stack.append("xyz.com/three")
print(stack)
stack.pop()
print(stack)
stack.pop()
print(stack)
stack.append("xyz.com/four")
print(stack)
print(stack[-1])
# -
# ---
# ### Queue using Python List
# Queue is a FIFO data structure -- first-in, first-out.
# With a plain list we use insert(0, item) to enqueue at the front and pop() to dequeue from the back.
# Note that insert(0, ...) shifts every element, so each enqueue is O(n); a deque (covered below) avoids this.
# See the [Python docs](https://docs.python.org/3/library/collections.html#collections.deque) for deque.
price_queue = []
price_queue.insert(0,1200.10)
price_queue.insert(0,1200.20)
price_queue.insert(0,1200.30)
price_queue.insert(0,1200.40)
print(price_queue)
price_queue.pop()
print(price_queue)
#
# ### Queue using Python Deque
from collections import deque
my_queue = deque()
my_queue.append(5)
my_queue.append(10)
my_queue.append(15)
print(my_queue)
print(my_queue.popleft())
# ### Fun exercise:
# Write a wrapper class for the Queue class, similar to what we did for Stack, but using Python deque.
# Try adding enqueue, dequeue, and get_size methods.
# ### Python Single-ended Queue Wrapper Class using Deque
# We rename the append method to enqueue, and popleft to dequeue.
# We also add peek and get_size operations.
from collections import deque
class Queue():
def __init__(self):
self.queue = deque()
self.size = 0
def enqueue(self, item):
self.queue.append(item)
self.size += 1
    def dequeue(self):
if self.size > 0:
self.size -= 1
return self.queue.popleft()
else:
return None
def peek(self):
if self.size > 0:
ret_val = self.queue.popleft()
            self.queue.appendleft(ret_val)
return ret_val
else:
return None
def get_size(self):
return self.size
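# A quick smoke test for the queue wrapper, mirroring the Stack test above. This is a
# self-contained copy of the class (with `dequeue` taking no argument and a simpler
# `peek`, so the block runs on its own):

```python
from collections import deque

class Queue:
    def __init__(self):
        self.queue = deque()
        self.size = 0
    def enqueue(self, item):
        self.queue.append(item)
        self.size += 1
    def dequeue(self):  # removes and returns the front item
        if self.size > 0:
            self.size -= 1
            return self.queue.popleft()
        return None
    def peek(self):  # front item without removing it
        return self.queue[0] if self.size > 0 else None
    def get_size(self):
        return self.size

q = Queue()
q.enqueue('a'); q.enqueue('b'); q.enqueue('c')
print(q.peek())     # front of the queue, not removed
print(q.dequeue())  # same item, now removed
print(q.get_size())
```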
# ### Python MaxHeap
# A MaxHeap always bubbles the highest value to the top, so it can be removed instantly.
# Public functions: push, peek, pop
# Private functions: __swap, __floatUp, __bubbleDown, __str__.
class MaxHeap:
    def __init__(self, items=()):  # avoid a mutable default argument
self.heap = [0]
for item in items:
self.heap.append(item)
self.__floatUp(len(self.heap) - 1)
def push(self, data):
self.heap.append(data)
self.__floatUp(len(self.heap) - 1)
def peek(self):
        if len(self.heap) > 1:  # avoid an IndexError on an empty heap (and a falsy 0 at the top)
return self.heap[1]
else:
return False
def pop(self):
if len(self.heap) > 2:
self.__swap(1, len(self.heap) - 1)
max = self.heap.pop()
self.__bubbleDown(1)
elif len(self.heap) == 2:
max = self.heap.pop()
else:
max = False
return max
def __swap(self, i, j):
self.heap[i], self.heap[j] = self.heap[j], self.heap[i]
def __floatUp(self, index):
parent = index//2
if index <= 1:
return
elif self.heap[index] > self.heap[parent]:
self.__swap(index, parent)
self.__floatUp(parent)
def __bubbleDown(self, index):
left = index * 2
right = index * 2 + 1
largest = index
if len(self.heap) > left and self.heap[largest] < self.heap[left]:
largest = left
if len(self.heap) > right and self.heap[largest] < self.heap[right]:
largest = right
if largest != index:
self.__swap(index, largest)
self.__bubbleDown(largest)
def __str__(self):
return str(self.heap)
# ### MaxHeap Test Code
m = MaxHeap([95, 3, 21])
m.push(10)
print(m)
print(m.pop())
print(m.peek())
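# The standard library's `heapq` module implements a min-heap; a common way to get
# max-heap behaviour without a hand-rolled class is to negate values on the way in and
# out (a sketch, not a drop-in replacement for the MaxHeap class above):

```python
import heapq

class HeapqMax:
    def __init__(self, items=()):
        self._h = [-x for x in items]
        heapq.heapify(self._h)  # O(n) build
    def push(self, x):
        heapq.heappush(self._h, -x)
    def peek(self):
        return -self._h[0] if self._h else None
    def pop(self):
        return -heapq.heappop(self._h) if self._h else None

m = HeapqMax([95, 3, 21])
m.push(10)
print(m.pop())   # 95
print(m.peek())  # 21
```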
| data-structures/Stacks, Queues & Heaps.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Batch processing with Argo Workflows
#
# In this notebook we will dive into how you can run batch processing with Argo Workflows and Seldon Core.
#
# Dependencies:
#
# * Seldon core installed as per the docs with an ingress
# * Minio running in your cluster to use as local (s3) object storage
# * Argo Workflows installed in cluster (and the argo CLI for commands)
# ### Setup
#
# #### Install Seldon Core
# Use the notebook to [set-up Seldon Core with Ambassador or Istio Ingress](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html).
#
# Note: If running with KIND, make sure you follow [these steps](https://github.com/argoproj/argo/issues/2376#issuecomment-595593237) as a workaround to the `/.../docker.sock` known issue.
#
# #### Set up Minio in your cluster
# Use the notebook to [set-up Minio in your cluster](https://docs.seldon.io/projects/seldon-core/en/latest/examples/minio_setup.html).
#
# #### Create rclone configuration
# In this example, the workflow stages responsible for pulling / pushing data to in-cluster MinIO S3 storage use the `rclone` CLI.
# To configure the CLI we will create the following secret:
# %%writefile rclone-config.yaml
apiVersion: v1
kind: Secret
metadata:
name: rclone-config-secret
type: Opaque
stringData:
rclone.conf: |
[cluster-minio]
type = s3
provider = minio
env_auth = false
access_key_id = minioadmin
secret_access_key = minioadmin
endpoint = http://minio.minio-system.svc.cluster.local:9000
# !kubectl apply -n default -f rclone-config.yaml
# #### Install Argo Workflows
# You can follow the instructions from the official [Argo Workflows Documentation](https://github.com/argoproj/argo#quickstart).
#
# You also need to make sure that argo has permissions to create seldon deployments - for this you can create a role:
# %%writefile role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: workflow
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- "*"
- apiGroups:
- "apps"
resources:
- deployments
verbs:
- "*"
- apiGroups:
- ""
resources:
- pods/log
verbs:
- "*"
- apiGroups:
- machinelearning.seldon.io
resources:
- "*"
verbs:
- "*"
# !kubectl apply -n default -f role.yaml
# A service account:
# !kubectl create -n default serviceaccount workflow
# And a binding
# !kubectl create rolebinding workflow -n default --role=workflow --serviceaccount=default:workflow
# ### Create some input for our model
#
# We will create a file that will contain the inputs that will be sent to our model
# !mkdir -p assets/
import random
import os
random.seed(0)
with open("assets/input-data.txt", "w") as f:
for _ in range(10000):
data = [random.random() for _ in range(4)]
data = "[[" + ", ".join(str(x) for x in data) + "]]\n"
f.write(data)
# #### Check the contents of the file
# !wc -l assets/input-data.txt
# !head assets/input-data.txt
# #### Upload the file to our minio
# !mc mb minio-seldon/data
# !mc cp assets/input-data.txt minio-seldon/data/
# #### Create Argo Workflow
#
# In order to create our argo workflow we have made it simple so you can leverage the power of the helm charts.
#
# Before we dive into the contents of the full helm chart, let's first give it a try with some of the settings.
#
# We will run a batch job that will set up a Seldon Deployment with 10 replicas and 100 batch client workers to send requests.
# !helm template seldon-batch-workflow helm-charts/seldon-batch-workflow/ \
# --set workflow.name=seldon-batch-process \
# --set seldonDeployment.name=sklearn \
# --set seldonDeployment.replicas=10 \
# --set seldonDeployment.serverWorkers=1 \
# --set seldonDeployment.serverThreads=10 \
# --set batchWorker.workers=100 \
# --set batchWorker.payloadType=ndarray \
# --set batchWorker.dataType=data \
# | argo submit --serviceaccount workflow -
# !argo list -n default
# !argo get -n default seldon-batch-process
# !argo -n default logs seldon-batch-process
# ### Check output in object store
#
# We can now visualise the output that we obtained in the object store.
#
# First we can check that the file is present:
import json
# wf_arr = !argo get -n default seldon-batch-process -o json
wf = json.loads("".join(wf_arr))
WF_UID = wf["metadata"]["uid"]
print(f"Workflow UID is {WF_UID}")
# !mc ls minio-seldon/data/output-data-"$WF_UID".txt
# Now we can output the contents of the file created using the `mc head` command.
# !mc cp minio-seldon/data/output-data-"$WF_UID".txt assets/output-data.txt
# !head assets/output-data.txt
# !argo delete -n default seldon-batch-process
| examples/batch/argo-workflows-batch/README.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Smallest Integer That Is Not a Subset Sum of the List
#
# [](https://github.com/mjd507)
# [](https://mp.weixin.qq.com/s/XxKKHtyEJn9TUZPdFWZBPg)
#
# 
#
# Given a sorted list of positive integers, find the smallest positive integer that cannot be obtained as the sum of any subset of the list.
# ## 用例说明
#
# ```python
# Input: [1, 2, 3, 8, 9, 10]
# Output: 7
# ```
def findSmallest(nums: list) -> int:
result = 1
for item in nums:
if item <= result:
result += item
else:
break
return result
print(findSmallest([1, 2, 3, 8, 9, 10]))
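# The greedy scan works because of an invariant: if every value in `[1, result)` is
# reachable as a subset sum, then any next item `item <= result` extends the reachable
# range to `[1, result + item)`; the first item larger than `result` leaves `result`
# itself unreachable. A brute-force cross-check over all subsets (exponential, so only
# for tiny inputs):

```python
from itertools import combinations

def find_smallest_bruteforce(nums):
    # Enumerate every subset sum, then walk up from 1 to the first gap.
    sums = {sum(c) for r in range(len(nums) + 1) for c in combinations(nums, r)}
    k = 1
    while k in sums:
        k += 1
    return k

print(find_smallest_bruteforce([1, 2, 3, 8, 9, 10]))  # 7
```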
| March/Week12/83.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import scipy
from sklearn.tree import DecisionTreeClassifier
from io import StringIO  # sklearn.externals.six has been removed from modern scikit-learn
from IPython.display import Image
from sklearn.tree import export_graphviz
import pydotplus
from lightgbm import LGBMClassifier
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score, KFold, train_test_split
from sklearn.metrics import f1_score, roc_auc_score, accuracy_score, confusion_matrix, precision_score, recall_score
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier, BaseEnsemble, ExtraTreesClassifier, GradientBoostingClassifier, RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from catboost import CatBoostClassifier
import eli5
import shap
from sklearn import model_selection
pd.options.display.max_columns = 1000
pd.options.display.max_rows = 1000
# -
data = pd.read_csv('../data/data_full_final.csv')
train_embedding = pd.read_csv('../data/train_AE_embeddings.csv')
test_embedding = pd.read_csv('../data/test_AE_embeddings.csv')
data_embedding = pd.concat([train_embedding,test_embedding],axis=0)
for col in data_embedding.columns:
data[col] = data_embedding[col].values
data_embedding.shape
data.shape
cols = list(data.columns)
cols.remove('RESULT')
from sklearn.preprocessing import MinMaxScaler, StandardScaler
clf = MinMaxScaler((0,1))
data_scaled = clf.fit_transform(data[cols])
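# `MinMaxScaler((0, 1))` maps each column to `[0, 1]` via `(x - min) / (max - min)`.
# The same transform in plain numpy, on toy data:

```python
import numpy as np

X = np.array([[1.0, 10.0], [3.0, 30.0], [5.0, 20.0]])
col_min, col_max = X.min(axis=0), X.max(axis=0)
X_scaled = (X - col_min) / (col_max - col_min)  # column-wise rescale to [0, 1]
print(X_scaled)
```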
from sklearn.cluster import KMeans
distortions = []
K = np.arange(10,100,10)
for k in K:
kmeanModel = KMeans(n_clusters=int(k)).fit(data_scaled)
distortions.append(kmeanModel.inertia_)
    print(k)
import matplotlib.pyplot as plt
plt.plot(K, distortions, 'bx-')
plt.xlabel('k')
plt.ylabel('Distortion')
plt.title('The Elbow Method showing the optimal k')
plt.show()
kmeanModel = KMeans(n_clusters=int(20)).fit(data_scaled)
data['cluster_id'] = kmeanModel.labels_
data.groupby(['cluster_id'])['RESULT'].value_counts(normalize=True)
data.cluster_id.value_counts()
| Mortgage Propensity Modelling/notebooks/clustering.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # WeatherPy
# ----
#
# #### Note
# * Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
# # Observable Trends
#
# 1.) As displayed in the City Latitude vs Max Temperature plot, the temperature decreases as you travel away from the equator. This is magnified when looking at latitude vs temperature in the northern hemisphere: there is a strong negative relationship, with a correlation coefficient of r = -0.89, meaning the temperature decreases as you travel farther north.
#
#
# 2.) There is no observable correlation between latitude and wind speed for either of the hemispheres. This is demonstrated by the very weak r values of -.06 and -.12.
#
#
# 3.) A weak to moderate relationship can be seen between latitude and humidity, with r values of .39 in the northern hemisphere and .44 in the south.
#
#
# 4.) Even though the data displays weak to no relationship between latitude and humidity, cloudiness, and wind speed, this is only a representation of weather across the world in a single day. We could gain further insight by viewing weather patterns over a longer period of time.
#
#
# +
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import time
from scipy.stats import linregress
# Import API key
from api_keys import weather_api_key
# Incorporated citipy to determine city based on latitude and longitude
from citipy import citipy
# Output File (CSV)
output_data_file = "../output_data/cities.csv"
# Range of latitudes and longitudes
lat_range = (-90, 90)
lng_range = (-180, 180)
# -
# ## Generate Cities List
# +
# List for holding lat_lngs and cities
lat_lngs = []
cities = []
# Create a set of random lat and lng combinations
lats = np.random.uniform(lat_range[0], lat_range[1], size=1500)
lngs = np.random.uniform(lng_range[0], lng_range[1], size=1500)
lat_lngs = zip(lats, lngs)
# Identify nearest city for each lat, lng combination
for lat_lng in lat_lngs:
city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name
    # If the city is unique, then add it to our cities list
if city not in cities:
cities.append(city)
# Print the city count to confirm sufficient count
total = len(cities)
print(f"We have found a total of {total} cities.")
# -
# ### Perform API Calls
# * Perform a weather check on each city using a series of successive API calls.
# * Include a print log of each city as it's being processed (with the city number and city name).
#
# create empty lists for location info & weather variables
found_cities = []
country = []
lat = []
lng = []
max_temp = []
humid = []
clouds = []
wind_speed = []
date = []
# +
# loop through cities list to build out json response and print log
url = f"http://api.openweathermap.org/data/2.5/weather?appid={weather_api_key}&units=imperial&q="
print("Beginning Data Retrieval ")
print("-----------------------------")
for index, city in enumerate(cities):
try:
query = f"{url}{city}"
response = requests.get(query)
result = response.json()
# append city & weather variable data to lists
country.append(result["sys"]["country"])
lat.append(result["coord"]["lat"])
lng.append(result["coord"]["lon"])
max_temp.append(result["main"]["temp_max"])
humid.append(result["main"]["humidity"])
clouds.append(result["clouds"]["all"])
wind_speed.append(result["wind"]["speed"])
date.append(result["dt"])
# increase index and print process statement
index += 1
print(f"Processing Record #{index} | {city}")
found_cities.append(city)
except:
index += 1
print(f"Record #{index} not found. Skipping...")
print("-----------------------------")
print("Data Retrieval Complete ")
print("-----------------------------")
# -
# ### Convert Raw Data to DataFrame
# * Export the city data into a .csv.
# * Display the DataFrame
# +
# create dataframe
weather_df = pd.DataFrame({
"City": found_cities,
"Country": country,
"Latitude": lat,
"Longitude": lng,
"Max Temp (F)": max_temp,
"Humidity (%)": humid,
"Cloudiness (%)": clouds,
"Wind Speed (mph)": wind_speed,
"Date": date
})
weather_df
# -
# export to csv
csv_path = "../output_data/weather.csv"
weather_csv = weather_df.to_csv(csv_path, index=False)
weather_df.count()
weather_df.describe()
# ## Inspect the data and remove the cities where the humidity > 100%.
# ----
# Skip this step if there are no cities that have humidity > 100%.
# Get the indices of cities that have humidity over 100%.
humid_df = weather_df.loc[weather_df["Humidity (%)"] > 100].index
humid_df
# +
# Make a new DataFrame equal to the city data to drop all humidity outliers by index.
# Passing "inplace=False" will make a copy of the city_data DataFrame, which we call "clean_city_data".
city_df = weather_df.drop(humid_df, inplace=False)
city_path = "../output_data/cities.csv"
cities_csv = city_df.to_csv(city_path, index=False)
# wasn't actually necessary, humidity cannot exceed 100% - that is called rain.
# -
# ## Plotting the Data
# * Use proper labeling of the plots using plot titles (including date of analysis) and axes labels.
# * Save the plotted figures as .pngs.
# ## Latitude vs. Temperature Plot
# +
plt.scatter(city_df["Latitude"], city_df["Max Temp (F)"])
plt.ylabel("Max Temperature (F)", fontweight="bold")
plt.xlabel("Latitude", fontweight="bold")
plt.title("City Latitude vs. Temperature (%s)" % time.strftime("%x"), fontweight="bold")
plt.grid(True)
plt.savefig("../output_data/lat_vs_temp.png")
plt.show()
# strftime documentation: https://www.programiz.com/python-programming/datetime/strftime
# -
# The above plot analyzes the relationship between a city's latitude and the max temperature on February 16, 2021. It shows that as you travel away from the equator, the temperature decreases.
# ## Latitude vs. Humidity Plot
plt.scatter(city_df["Latitude"], city_df["Humidity (%)"])
plt.ylabel("Humidity (%)", fontweight="bold")
plt.xlabel("Latitude", fontweight="bold")
plt.title("City Latitude vs. Humidity (%s)" % time.strftime("%x"), fontweight="bold")
plt.grid(True)
plt.savefig("../output_data/lat_vs_humidity.png")
plt.show()
# The above plot analyzes the relationship between a city's latitude and the humidity on February 16, 2021. No clear correlation is shown between latitude and humidity.
# ## Latitude vs. Cloudiness Plot
plt.scatter(city_df["Latitude"], city_df["Cloudiness (%)"])
plt.ylabel("Cloudiness (%)", fontweight="bold")
plt.xlabel("Latitude", fontweight="bold")
plt.title("City Latitude vs. Cloudiness (%s)" % time.strftime("%x"), fontweight="bold")
plt.grid(True)
plt.savefig("../output_data/lat_vs_clouds.png")
plt.show()
# The above plot analyzes the relationship between a city's latitude and cloudiness on February 16, 2021. There does not seem to be any correlation between latitude and cloudiness.
# ## Latitude vs. Wind Speed Plot
plt.scatter(city_df["Latitude"], city_df["Wind Speed (mph)"])
plt.ylabel("Wind Speed (mph)", fontweight="bold")
plt.xlabel("Latitude", fontweight="bold")
plt.title("City Latitude vs. Wind Speed (%s)" % time.strftime("%x"), fontweight="bold")
plt.grid(True)
plt.savefig("../output_data/lat_vs_wind.png")
plt.show()
# The above plot analyzes the relationship between a city's latitude and wind speed on February 16, 2021. Again, the data does not display any clear correlation between latitude and wind speed.
# ## Linear Regression
# +
# create northern & southern city dataframes
north_df = city_df.loc[city_df["Latitude"] > 0]
south_df = city_df.loc[city_df["Latitude"] < 0]
# -
# #### Northern Hemisphere - Max Temp vs. Latitude Linear Regression
# +
# linear reg calculations
(slope, intercept, rvalue, pvalue, stderr) = linregress(north_df["Latitude"], north_df["Max Temp (F)"])
slope = round(slope, 2)
intercept = round(intercept, 2)
rvalue = round(rvalue, 2)
regress_values = (slope * north_df["Latitude"]) + intercept
line_eq = f"y = {slope}x + {intercept}"
# print r value statement
print(f"The correlation coefficient between the Northern Hemisphere Max Temp vs. Latitude is r = {rvalue}")
# plot, label, and annotate
# plt.figure(figsize = (10,8))
plt.scatter(north_df["Latitude"], north_df["Max Temp (F)"])
plt.xlabel("Latitude")
plt.ylabel("Max Temp (F)")
plt.title("Northern Hemisphere Max Temp vs. Latitude (%s)" % time.strftime("%x"))
plt.plot(north_df["Latitude"], regress_values, "r-")
plt.annotate(line_eq, (10,-20), fontsize=14, color="red")
plt.savefig("../output_data/north_hem_vs_maxtemp.png")
plt.show()
# -
# #### Southern Hemisphere - Max Temp vs. Latitude Linear Regression
# +
# linear regr calculations
(slope, intercept, rvalue, pvalue, stderr) = linregress(south_df["Latitude"], south_df["Max Temp (F)"])
slope = round(slope, 2)
intercept = round(intercept, 2)
rvalue = round(rvalue, 2)
regress_values = (slope * south_df["Latitude"]) + intercept
line_eq = f"y = {slope}x + {intercept}"
# print r value statement
print(f"The correlation coefficient between the Southern Hemisphere Max Temp vs. Latitude is r = {rvalue}")
# plot, label, and annotate
plt.scatter(south_df["Latitude"], south_df["Max Temp (F)"])
plt.xlabel("Latitude")
plt.ylabel("Max Temp (F)")
plt.title("Southern Hemisphere Max Temp vs. Latitude (%s)" % time.strftime("%x"))
plt.plot(south_df["Latitude"], regress_values, "r-")
plt.annotate(line_eq, (-25,52), fontsize=14, color="red")
plt.savefig("../output_data/south_hem_vs_maxtemp.png")
plt.show()
# -
# #### Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression
# +
# linear reg calculations
(slope, intercept, rvalue, pvalue, stderr) = linregress(north_df["Latitude"], north_df["Humidity (%)"])
slope = round(slope, 2)
intercept = round(intercept, 2)
rvalue = round(rvalue, 2)
regress_values = (slope * north_df["Latitude"]) + intercept
line_eq = f"y = {slope}x + {intercept}"
# print r value statement
print(f"The correlation coefficient between the Northern Hemisphere Humidity (%) vs. Latitude is r = {rvalue}")
# plot, label, and annotate
# plt.figure(figsize = (10,8))
plt.scatter(north_df["Latitude"], north_df["Humidity (%)"])
plt.xlabel("Latitude")
plt.ylabel("Humidity (%)")
plt.title("Northern Hemisphere Humidity vs. Latitude (%s)" % time.strftime("%x"))
plt.plot(north_df["Latitude"], regress_values, "r-")
plt.annotate(line_eq, (40,20), fontsize=14, color="red")
plt.savefig("../output_data/north_hem_vs_humidity.png")
plt.show()
# -
# #### Southern Hemisphere - Humidity (%) vs. Latitude Linear Regression
# +
# linear regr calculations
(slope, intercept, rvalue, pvalue, stderr) = linregress(south_df["Latitude"], south_df["Humidity (%)"])
slope = round(slope, 2)
intercept = round(intercept, 2)
rvalue = round(rvalue, 2)
regress_values = (slope * south_df["Latitude"]) + intercept
line_eq = f"y = {slope}x + {intercept}"
# print r value statement
print(f"The correlation coefficient between the Southern Hemisphere Humidity (%) vs. Latitude is r = {rvalue}")
# plot, label, and annotate
plt.scatter(south_df["Latitude"], south_df["Humidity (%)"])
plt.xlabel("Latitude")
plt.ylabel("Humidity (%)")
plt.title("Southern Hemisphere Humidity vs. Latitude (%s)" % time.strftime("%x"))
plt.plot(south_df["Latitude"], regress_values, "r-")
plt.annotate(line_eq, (-22,30), fontsize=14, color="red")
plt.savefig("../output_data/south_hem_vs_humidity.png")
plt.show()
# -
# #### Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
# +
# linear reg calculations
(slope, intercept, rvalue, pvalue, stderr) = linregress(north_df["Latitude"], north_df["Cloudiness (%)"])
slope = round(slope, 2)
intercept = round(intercept, 2)
rvalue = round(rvalue, 2)
regress_values = (slope * north_df["Latitude"]) + intercept
line_eq = f"y = {slope}x + {intercept}"
# print r value statement
print(f"The correlation coefficient between the Northern Hemisphere Cloudiness (%) vs. Latitude is r = {rvalue}")
# plot, label, and annotate
# plt.figure(figsize = (10,8))
plt.scatter(north_df["Latitude"], north_df["Cloudiness (%)"])
plt.xlabel("Latitude")
plt.ylabel("Cloudiness (%)")
plt.title("Northern Hemisphere Cloudiness vs. Latitude (%s)" % time.strftime("%x"))
plt.plot(north_df["Latitude"], regress_values, "r-")
plt.annotate(line_eq, (43,31), fontsize=14, color="red")
plt.savefig("../output_data/north_hem_vs_clouds.png")
plt.show()
# -
# #### Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
# +
# linear regr calculations
(slope, intercept, rvalue, pvalue, stderr) = linregress(south_df["Latitude"], south_df["Cloudiness (%)"])
slope = round(slope, 2)
intercept = round(intercept, 2)
rvalue = round(rvalue, 2)
regress_values = (slope * south_df["Latitude"]) + intercept
line_eq = f"y = {slope}x + {intercept}"
# print r value statement
print(f"The correlation coefficient between the Southern Hemisphere Cloudiness (%) vs. Latitude is r = {rvalue}")
# plot, label, and annotate
plt.scatter(south_df["Latitude"], south_df["Cloudiness (%)"])
plt.xlabel("Latitude")
plt.ylabel("Cloudiness (%)")
plt.title("Southern Hemisphere Cloudiness vs. Latitude (%s)" % time.strftime("%x"))
plt.plot(south_df["Latitude"], regress_values, "r-")
plt.annotate(line_eq, (-50,57), fontsize=14, color="red")
plt.savefig("../output_data/south_hem_vs_clouds.png")
plt.show()
# -
# #### Northern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
# +
# linear reg calculations
(slope, intercept, rvalue, pvalue, stderr) = linregress(north_df["Latitude"], north_df["Wind Speed (mph)"])
slope = round(slope, 2)
intercept = round(intercept, 2)
rvalue = round(rvalue, 2)
regress_values = (slope * north_df["Latitude"]) + intercept
line_eq = f"y = {slope}x + {intercept}"
# print r value statement
print(f"The correlation coefficient between the Northern Hemisphere Wind Speed (mph) vs. Latitude is r = {rvalue}")
# plot, label, and annotate
# plt.figure(figsize = (10,8))
plt.scatter(north_df["Latitude"], north_df["Wind Speed (mph)"])
plt.xlabel("Latitude")
plt.ylabel("Wind Speed (mph)")
plt.title("Northern Hemisphere Wind Speed vs. Latitude (%s)" % time.strftime("%x"))
plt.plot(north_df["Latitude"], regress_values, "r-")
plt.annotate(line_eq, (40,28), fontsize=14, color="red")
plt.savefig("../output_data/north_hem_vs_wind.png")
plt.show()
# -
# #### Southern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
# +
# linear regr calculations
(slope, intercept, rvalue, pvalue, stderr) = linregress(south_df["Latitude"], south_df["Wind Speed (mph)"])
slope = round(slope, 2)
intercept = round(intercept, 2)
rvalue = round(rvalue, 2)
regress_values = (slope * south_df["Latitude"]) + intercept
line_eq = f"y = {slope}x + {intercept}"
# print r value statement
print(f"The correlation coefficient between the Southern Hemisphere Wind Speed (mph) vs. Latitude is r = {rvalue}")
# plot, label, and annotate
plt.scatter(south_df["Latitude"], south_df["Wind Speed (mph)"])
plt.xlabel("Latitude")
plt.ylabel("Wind Speed (mph)")
plt.title("Southern Hemisphere Wind Speed vs. Latitude (%s)" % time.strftime("%x"))
plt.plot(south_df["Latitude"], regress_values, "r-")
plt.annotate(line_eq, (-50,18), fontsize=14, color="red")
plt.savefig("../output_data/south_hem_vs_wind.png")
plt.show()
# -
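# The eight regression cells above repeat the same fit/round/annotate steps; they could
# be collapsed into one helper. A sketch using `numpy.polyfit` in place of scipy's
# `linregress` (slope and intercept only -- no r-value or p-value):

```python
import numpy as np

def fit_line(x, y):
    """Least-squares line y = slope*x + intercept, plus the annotation label."""
    slope, intercept = np.polyfit(x, y, deg=1)
    slope, intercept = round(slope, 2), round(intercept, 2)
    return slope, intercept, f"y = {slope}x + {intercept}"

# The points below lie exactly on y = 2x + 1.
slope, intercept, line_eq = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(line_eq)
```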
| WeatherPy/WeatherPy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # DAT210x - Programming with Python for DS
# ## Module3 - Lab6
# %matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
# +
# Look pretty...
# matplotlib.style.use('ggplot')
plt.style.use('ggplot')
# -
# Load up the wheat seeds dataset into a dataframe. We've stored a copy in the Datasets directory.
# .. your code here ..
import numpy as np
df = pd.read_csv('C:\\Users\\mirfa\\Desktop\\Irfan\\Datascience\\EDX\\DAT210x-master\\DAT210x-master\\Module3\\Datasets/wheat.data', sep=',', na_values=["?"])
df=df.drop('id',axis =1)
df.head()
# If you loaded the `id` column as a feature (hint: _you shouldn't have!_), then be sure to drop it.
# .. your code here ..
df.corr()
# Compute the correlation matrix of your dataframe:
# +
# .. your code here ..
import matplotlib.pyplot as plt
plt.imshow(df.corr(), cmap=plt.cm.Blues, interpolation='nearest')
plt.colorbar()
tick_marks = list(range(len(df.columns)))
plt.xticks(tick_marks, df.columns, rotation='vertical')
plt.yticks(tick_marks, df.columns)
plt.show()
# -
# Graph the correlation matrix using `imshow` or `matshow`:
# +
# .. your code here ..
# -
# Display the graphs:
plt.show()
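# For reference, `df.corr()` computes the Pearson correlation below for every pair of columns; a minimal hand-rolled version (toy sequences, not the wheat dataset):

```python
import math

def pearson_r(x, y):
    # Pearson correlation coefficient for two equal-length sequences,
    # as computed pairwise by DataFrame.corr().
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(pearson_r([1, 2, 3], [2, 4, 6]))  # ≈ 1.0  (perfectly linear, increasing)
print(pearson_r([1, 2, 3], [3, 2, 1]))  # ≈ -1.0 (perfectly linear, decreasing)
```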
| Module3/Module3 - Lab6.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/quickstarts/python-disk
import os
import sys
import requests
# If you are using a Jupyter notebook, uncomment the following line.
# # %matplotlib inline
import matplotlib.pyplot as plt
from PIL import Image
from io import BytesIO
subscription_key = 'e6a8b2a993a04ec5aee19d8d890911ab'
endpoint = 'https://nabila-cv.cognitiveservices.azure.com/'
analyze_url = endpoint + "vision/v3.1/analyze"
# +
# Add your Computer Vision subscription key and endpoint to your environment variables.
if 'COMPUTER_VISION_SUBSCRIPTION_KEY' in os.environ:
subscription_key = os.environ['COMPUTER_VISION_SUBSCRIPTION_KEY']
else:
print("\nSet the COMPUTER_VISION_SUBSCRIPTION_KEY environment variable.\n**Restart your shell or IDE for changes to take effect.**")
sys.exit()
if 'COMPUTER_VISION_ENDPOINT' in os.environ:
endpoint = os.environ['COMPUTER_VISION_ENDPOINT']
analyze_url = endpoint + "vision/v3.1/analyze"
# -
# !ls
# +
# Set image_path to the local path of an image that you want to analyze.
# Sample images are here, if needed:
# https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/ComputerVision/Images
image_path = "banana.jpg"
# Read the image into a byte array
image_data = open(image_path, "rb").read()
headers = {'Ocp-Apim-Subscription-Key': subscription_key,
'Content-Type': 'application/octet-stream'}
params = {'visualFeatures': 'Categories,Description,Color'}
response = requests.post(
analyze_url, headers=headers, params=params, data=image_data)
response.raise_for_status()
# +
# The 'analysis' object contains various fields that describe the image. The most
# relevant caption for the image is obtained from the 'description' property.
analysis = response.json()
print(analysis)
image_caption = analysis["description"]["captions"][0]["text"].capitalize()
# Display the image and overlay it with the caption.
image = Image.open(BytesIO(image_data))
plt.imshow(image)
plt.axis("off")
_ = plt.title(image_caption, size="x-large", y=-0.1)
plt.show()
# -
analysis['description']['tags'][0]
analysis.keys()
# NOTE: this line assumes a `computervision_client` SDK object and a
# `remote_image_url` string that are not defined in this notebook, which
# calls the REST endpoint directly instead.
tags_result_remote = computervision_client.tag_image(remote_image_url)
| azure/img_classification_cognitiveservices_cv.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import sys
sys.path.insert(1, 'C:/Users/peter/Desktop/volatility-forecasting/midas')
import numpy as np
import pandas as pd
from base import BaseModel
from stats import loglikelihood_normal
from datetime import timedelta, datetime
from monthdelta import monthdelta
import time
import statsmodels.api as sm
import matplotlib.pyplot as plt
import seaborn as sns
# +
class EWMA(BaseModel):
def __init__(self, plot = True, lam = 0.94, *args):
self.plot = plot
        self.lam = lam  # use the passed-in smoothing parameter (was hardcoded to 0.94)
self.args = args
def initialize_params(self, y):
self.init_params = np.array([self.lam])
return self.init_params
def model_filter(self, params, y):
T = y.shape[0]
sigma2 = np.zeros(T)
lamb = params
for t in range(T):
if t == 0:
sigma2[t] = 1.0
else:
sigma2[t] = lamb * sigma2[t - 1] + (1 - lamb) * y[t - 1] ** 2
return sigma2
def loglikelihood(self, params, y):
sigma2 = self.model_filter(params, y)
return loglikelihood_normal(y, sigma2)
def simulate(self, lamb, T):
sigma2 = np.zeros(T)
ret = np.zeros(T)
for t in range(T):
if t == 0:
sigma2[t] = 1.0
else:
sigma2[t] = lamb * sigma2[t - 1] + (1 - lamb) * ret[t - 1] ** 2
ret[t] = np.random.normal(scale = np.sqrt(sigma2[t]))
return ret, sigma2
class Panel_EWMA(BaseModel):
def __init__(self, plot = True, lam = 0.94, *args):
self.plot = plot
        self.lam = lam  # use the passed-in smoothing parameter (was hardcoded to 0.94)
self.args = args
def initialize_params(self, y):
self.init_params = np.array([self.lam])
return self.init_params
def model_filter(self, params, y):
T = y.shape[0]
sigma2 = np.zeros(T)
lamb = params
for t in range(T):
if t == 0:
sigma2[t] = 1.0
else:
sigma2[t] = lamb * sigma2[t - 1] + (1 - lamb) * y[t - 1] ** 2
return sigma2
def loglikelihood(self, params, y):
lls = 0
for i in range(y.shape[1]):
idx = np.where(np.isnan(y.iloc[:, i]) == False)[0]
sig = self.model_filter(params, y.iloc[idx, i].values)
if len(sig) == 0:
lls += 0
else:
lls += loglikelihood_normal(y.iloc[idx, i].values, sig)
return lls
def forecast(self, y):
row_nul = pd.DataFrame([[0]*y.shape[1]], columns = y.columns)
        y = pd.concat([y, row_nul])  # DataFrame.append is removed in pandas 2.0
forecast = np.zeros(len(y.columns))
        for i in range(y.shape[1]):  # iterate over y's columns (was a stray global ret_mat)
idx = np.where(np.isnan(y.iloc[:, i]) == False)[0]
if len(idx) == 0:
forecast[i] = np.nan
else:
                sig = self.model_filter(self.optimized_params, y.iloc[idx, i].values)
forecast[i] = sig[-1]
return forecast
def simulate(self, lamb = 0.94, T = 500, num = 100):
sigma2 = np.zeros((T, num))
r = np.zeros((T, num))
for t in range(T):
if t == 0:
sigma2[t] = 1.0
else:
sigma2[t] = lamb * sigma2[t - 1] + (1 - lamb) * r[t - 1] ** 2
r[t] = np.random.normal(0.0, np.sqrt(sigma2[t]), size = num)
return r, sigma2
# -
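# The variance recursion inside `model_filter` can be checked in isolation; a minimal standalone sketch (illustrative returns, λ = 0.9 for round numbers — the class default of 0.94 is the usual RiskMetrics value):

```python
# EWMA variance recursion: sigma2[t] = lam*sigma2[t-1] + (1-lam)*r[t-1]**2,
# with the conventional unit-variance start sigma2[0] = 1.
def ewma_variance(returns, lam=0.94):
    sigma2 = [1.0]
    for t in range(1, len(returns)):
        sigma2.append(lam * sigma2[t - 1] + (1 - lam) * returns[t - 1] ** 2)
    return sigma2

print(ewma_variance([0.0, 2.0, -1.0], lam=0.9))  # ≈ [1.0, 0.9, 1.21]
```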
model = EWMA()
y, sigma2 = model.simulate(lamb = 0.94, T = 1000)
model.fit(['01'], y)
def create_sim(num_of_sim = 500, length = 1000, param = 0.94, plot = False):
lamb = np.zeros(num_of_sim)
model = EWMA(plot = plot)
for t in range(num_of_sim):
y, _ = model.simulate(lamb = param, T = length)
model.fit(['01'], y)
lamb[t] = model.optimized_params
return pd.DataFrame(data = lamb)
sim1 = create_sim(length=500)
sim2 = create_sim()
sim3 = create_sim(length = 2000)
# +
lamb1 = sm.nonparametric.KDEUnivariate(sim1.values).fit()
lamb2 = sm.nonparametric.KDEUnivariate(sim2.values).fit()
lamb3 = sm.nonparametric.KDEUnivariate(sim3.values).fit()
fig , ax = plt.subplots(1, 1, figsize=(15, 4), tight_layout=True)
ax.plot(lamb1.support, lamb1.density, lw = 3, label = 'T = 500', zorder = 10)
ax.plot(lamb2.support, lamb2.density, lw = 3, label = 'T = 1000', marker = '^', markevery = 10, zorder = 10)
ax.plot(lamb3.support, lamb3.density, lw = 3, label = 'T = 2000', marker = 'o', markevery = 10, zorder = 10)
ax.set_title(r'$\lambda$'+" (Act = 0.94) parameter's density from different samples size")
ax.grid(True, zorder = -5)
ax.set_xlim((np.min(sim1.values), np.max(sim1.values)))
ax.legend(loc = 'best')
plt.show()
# -
model = Panel_EWMA()
y, _ = model.simulate()
model.fit(['01'], pd.DataFrame(y))
def create_sim(num_of_sim = 500, length = 500, param = 0.94, num_of_asset = 100, plot = False):
lamb = np.zeros(num_of_sim)
model = Panel_EWMA(plot = plot)
for t in range(num_of_sim):
y, _ = model.simulate(lamb = param, T = length, num = num_of_asset)
model.fit(['01'], pd.DataFrame(y))
lamb[t] = model.optimized_params
return pd.DataFrame(data = lamb)
sim11 = create_sim(plot = True)
sim22 = create_sim(num_of_asset = 50, plot = True)
sim33 = create_sim(num_of_asset = 200, plot = True)
# +
lamb1 = sm.nonparametric.KDEUnivariate(sim11.values).fit()
lamb2 = sm.nonparametric.KDEUnivariate(sim22.values).fit()
lamb3 = sm.nonparametric.KDEUnivariate(sim33.values).fit()
fig , ax = plt.subplots(1, 1, figsize=(15, 4), tight_layout=True)
ax.plot(lamb2.support, lamb2.density, lw = 3, label = 'N = 50', zorder = 10)
ax.plot(lamb1.support, lamb1.density, lw = 3, label = 'N = 100', marker = '^', markevery = 10, zorder = 10)
ax.plot(lamb3.support, lamb3.density, lw = 3, label = 'N = 200', marker = 'o', markevery = 10, zorder = 10)
ax.set_title(r'$\lambda$'+" (Act = 0.94) parameter's density from different samples size")
ax.grid(True, zorder = -5)
ax.set_xlim((np.min(sim22.values), np.max(sim22.values)))
ax.legend(loc = 'best')
plt.savefig('C:/Users/peter/Desktop/volatility-forecasting/results/panel_ewma_sim.png')
plt.show()
# +
ret_matrix = pd.read_csv('C:/Users/peter/Desktop/volatility-forecasting/results/ret_matrix.csv')
ret_matrix.set_index(pd.to_datetime(ret_matrix.Date), inplace = True)
ret_matrix = ret_matrix.iloc[:, 1:] * 100
ret_mat = ret_matrix.iloc[1:, :]
nan_cols = np.where(ret_mat.isna().sum().values == 1)[0]
nan_index = np.where(ret_mat.iloc[:, nan_cols].isna() == True)[0]
if len(set(nan_index)) == 1:
ret_mat = ret_mat.drop([ret_mat.index[nan_index[0]]])
ret_mat.pop('AMCR')
# -
y = ret_mat[(ret_mat.index >= datetime(1999,12,1)) & (ret_mat.index < datetime(2005, 1, 1))]
model = Panel_EWMA()
model.fit(['01'], y)
| Examples/Panel_EWMA.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Read and write CSV and XLS files
import pandas as pd
df = pd.read_csv('covid19.csv')
df
# +
# read excel file
df = pd.read_excel('investment.xlsx')
df
# -
#Write DF to csv
df.to_csv('new.csv')
df.to_csv('new_noIndex.csv',index = False)
#write DF to excel
df.to_excel('new.xlsx', sheet_name = 'investment')
# # GROUP-BY
import pandas as pd
df = pd.read_csv('investment')
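# What a `df.groupby(key)[value].sum()` call produces can be sketched with a plain dict (the column names below are illustrative, not taken from the investment file):

```python
# Group-by-and-sum, the core of df.groupby('sector')['amount'].sum():
# accumulate each row's value under its group key.
rows = [("tech", 10), ("energy", 5), ("tech", 7)]
totals = {}
for key, amount in rows:
    totals[key] = totals.get(key, 0) + amount
print(totals)  # {'tech': 17, 'energy': 5}
```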
| Panda/Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
df = pd.read_csv('default.csv')
# -
ageEdu = df.loc[:,['Age', 'Education']]
ageEdu.to_csv('ageEdu.csv', index = False)
ageEarn = df.loc[:,['Age','Yearly Income']]
ageEarnAbove50k = ageEarn.loc[lambda x: x['Yearly Income'] == '>50K']
ageEarnAbove50k.to_csv('ageEarnAbove50k.csv', index = False)
ageEarnBelow50k = ageEarn.loc[lambda x: x['Yearly Income'] == '<=50K']
ageEarnBelow50k.to_csv('ageEarnBelow50k.csv', index = False)
eduWage = df.loc[:,['Yearly Income', 'Education']]
eduWageAbove50k = eduWage.loc[lambda x: x['Yearly Income'] == '>50K']
eduWageAbove50k.to_csv('eduWageAbove50k.csv', index = False)
eduWageBelow50k = eduWage.loc[lambda x: x['Yearly Income'] == '<=50K']
eduWageBelow50k.to_csv('eduWageBelow50k.csv', index = False)
wageOc = df.loc[:,['Occupation', 'Yearly Income']]
wageOcAbove50k = wageOc.loc[lambda x: x['Yearly Income'] == '>50K']
wageOcAbove50k.to_csv('wageOcAbove50k.csv', index = False)
wageOcBelow50k = wageOc.loc[lambda x: x['Yearly Income'] == '<=50K']
wageOcBelow50k.to_csv('wageOcBelow50k.csv', index = False)
occupations = df.loc[:,['Occupation']]
occupations.to_csv('occupations.csv', index = False)
earnSex = df.loc[:,['Sex', 'Yearly Income']]
earnSexAbove50k = earnSex.loc[lambda x: x['Yearly Income'] == '>50K']
earnSexAbove50k.to_csv('earnSexAbove50k.csv',index = False)
earnSexBelow50k = earnSex.loc[lambda x: x['Yearly Income'] == '<=50K']
earnSexBelow50k.to_csv('earnSexBelow50k.csv',index = False)
earnMar = df.loc[:,['Yearly Income', 'Marital Status']]
earnMarAbove50k = earnMar.loc[lambda x: x['Yearly Income'] == '>50K']
earnMarAbove50k.to_csv('earnMarAbove50k.csv',index = False)
earnMarBelow50k = earnMar.loc[lambda x: x['Yearly Income'] == '<=50K']
earnMarBelow50k.to_csv('earnMarBelow50k.csv',index = False)
earnRace = df.loc[:,['Yearly Income', 'Race']]
earnRaceAbove50k = earnRace.loc[lambda x: x['Yearly Income'] == '>50K']
earnRaceAbove50k.to_csv('earnRaceAbove50k.csv',index = False)
earnRaceBelow50k = earnRace.loc[lambda x: x['Yearly Income'] == '<=50K']
earnRaceBelow50k.to_csv('earnRaceBelow50k.csv',index = False)
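# The repeated slice → filter → save pattern above could be factored into one helper; a sketch on plain dicts (the data rows here are illustrative, only the keys mirror the CSV's column names):

```python
# Split rows into the '>50K' and '<=50K' income groups, mirroring the
# repeated .loc[lambda x: ...] filters above.
def split_by_income(rows, income_key="Yearly Income"):
    above = [r for r in rows if r[income_key] == ">50K"]
    below = [r for r in rows if r[income_key] == "<=50K"]
    return above, below

rows = [{"Age": 30, "Yearly Income": ">50K"},
        {"Age": 22, "Yearly Income": "<=50K"},
        {"Age": 45, "Yearly Income": ">50K"}]
above, below = split_by_income(rows)
print(len(above), len(below))  # 2 1
```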
| data/processed/dataProcessing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
import ml3
EXP_FOLDER = os.path.join(ml3.__path__[0], "experiments/data/shaped_sine")
# # Shaping Loss Example
# ### Before running the visualization, please run in a terminal:
# !python run_shaped_sine_exp.py train True
# !python run_shaped_sine_exp.py train False
# %pylab inline
from ml3.shaped_sine_utils import render
from ml3.shaped_sine_utils import plot_loss
def normalize_data(data):
norm_data = []
for d in data:
norm_data.append((d-min(d))/(max(d)-min(d)))
return norm_data
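# A quick check of the min-max scaling that `normalize_data` applies to each series (illustrative values):

```python
# Each series is rescaled to the [0, 1] range: (d - min) / (max - min).
d = [2.0, 4.0, 6.0]
lo, hi = min(d), max(d)
scaled = [(v - lo) / (hi - lo) for v in d]
print(scaled)  # [0.0, 0.5, 1.0]
```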
# +
figure(figsize=(10,3))
freq=0.5
theta_ranges, landscape_with_extra, landscape_mse = plot_loss(extra=True,exp_folder=EXP_FOLDER,freq=freq)
subplot(2,2,1)
plt.plot(theta_ranges,landscape_with_extra)
plt.ylabel('Loss')
plt.legend(['Shaped ML$^3$ Landscape'])
plt.axvline(x=freq,c='red')
subplot(2,2,3)
plt.plot(theta_ranges,landscape_mse,c='C1')
plt.xlabel('Theta')
plt.ylabel('Loss')
plt.legend(['MSE Loss Landscape'])
plt.axvline(x=freq,c='red')
theta_ranges, landscape_wo_extra, landscape_mse = plot_loss(extra=False,exp_folder=EXP_FOLDER,freq=freq)
subplot(2,2,2)
plt.plot(theta_ranges,landscape_wo_extra)
plt.ylabel('Loss')
plt.legend(['ML$^3$ Landscape'])
plt.axvline(x=freq,c='red')
subplot(2,2,4)
plt.plot(theta_ranges,landscape_mse,c='C1')
plt.xlabel('Theta')
plt.ylabel('Loss')
plt.legend(['MSE Loss Landscape'])
plt.axvline(x=freq,c='red')
# -
theta_ranges = np.load(f'{EXP_FOLDER}/theta_ranges_True_.npy')
ml3_extra_loss = normalize_data(np.load(f'{EXP_FOLDER}/landscape_with_extra_True_.npy'))
ml3_mse_loss = normalize_data(np.load(f'{EXP_FOLDER}/landscape_mse_False_.npy'))
ml3_not_shaped_loss = normalize_data(np.load(f'{EXP_FOLDER}/landscape_with_extra_False_.npy'))
freq=0.5
render(theta_ranges,ml3_extra_loss,'C0',freq=freq,file_path=f'{EXP_FOLDER}/ml3_shaped_loss_sine.gif')
render(theta_ranges,ml3_not_shaped_loss,'C2',freq=freq,file_path=f'{EXP_FOLDER}/ml3_not_shaped_loss_sine.gif')
render(theta_ranges,ml3_mse_loss,'C1',freq=freq,file_path=f'{EXP_FOLDER}/mse_loss_sine.gif')
| ml3/experiments/Loss shaping visualization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # basemap
#
# As Randy has shown, matplotlib has a lot of functionality. There are times when you want to take it further. This is especially true when you want to alter geographic projections, plot multiple data sets, and interact with web mapping services.
# +
# %matplotlib inline
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.patches import Polygon
# -
# setup Lambert Conformal basemap.
# set resolution=None to skip processing of boundary datasets.
m = Basemap(width=12000000,
height=9000000,
projection='lcc',
resolution=None,
lat_1=45.,
lat_2=55,
lat_0=50,
lon_0=-107.)
m.bluemarble()
plt.show()
# # shaded relief
# +
# %matplotlib inline
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
# setup Lambert Conformal basemap.
# set resolution=None to skip processing of boundary datasets.
m = Basemap(width=12000000,height=9000000,projection='lcc',
resolution=None,lat_1=45.,lat_2=55,lat_0=50,lon_0=-107.)
m.shadedrelief()
plt.show()
# -
# # drawlsmask()
# +
# %matplotlib inline
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
# setup Lambert Conformal basemap.
# set resolution=None to skip processing of boundary datasets.
m = Basemap(width=12000000,height=9000000,projection='lcc',
resolution=None,lat_1=45.,lat_2=55,lat_0=50,lon_0=-107.)
m.drawlsmask()
plt.show()
# -
# #### Geographic reference frame: treats the earth as a sphere
# <img src="figures/projectionTypes.png" width = 600>
#
# # projections
#
# experiment by changing the useProj variable
#
# # References
#
# this [USGS website](http://egsc.usgs.gov/isb//pubs/MapProjections/projections.html#lambert) is very useful, and I often come back to it.
#
# using this [list](http://matplotlib.org/basemap/users/mapsetup.html) of projections, explore how different projections perform using a [Tissot's indicatrix](https://en.wikipedia.org/wiki/Tissot%27s_indicatrix)
#
#
# setup lambert conformal basemap.
# lat_1 is first standard parallel.
# lat_2 is second standard parallel (defaults to lat_1).
# lon_0,lat_0 is central point.
# rsphere=(6378137.00,6356752.3142) specifies WGS84 ellipsoid
# area_thresh=1000 means don't plot coastline features less
# than 1000 km^2 in area.
m = Basemap(width=12000000,height=9000000,
rsphere=(6378137.00,6356752.3142),\
resolution='l',area_thresh=1000.,projection='lcc',\
lat_1=45.,lat_2=55,lat_0=50,lon_0=-107.)
m.drawcoastlines()
m.fillcontinents(color='coral',lake_color='aqua')
# draw parallels and meridians.
m.drawparallels(np.arange(-80.,81.,20.))
m.drawmeridians(np.arange(-180.,181.,20.))
m.drawmapboundary(fill_color='aqua')
# draw tissot's indicatrix to show distortion.
ax = plt.gca()
for y in np.linspace(m.ymax/20,19*m.ymax/20,9):
for x in np.linspace(m.xmax/20,19*m.xmax/20,12):
lon, lat = m(x,y,inverse=True)
poly = m.tissot(lon,lat,1.5,100,\
facecolor='green',zorder=10,alpha=0.5)
plt.title("Lambert Conformal Projection")
plt.show()
# setup albers equal area conic basemap
# lat_1 is first standard parallel.
# lat_2 is second standard parallel.
# lon_0,lat_0 is central point.
m = Basemap(width=8000000,height=7000000,
resolution='l',projection='aea',\
lat_1=40.,lat_2=60,lon_0=35,lat_0=50)
m.drawcoastlines()
m.drawcountries()
m.fillcontinents(color='coral',lake_color='aqua')
# draw parallels and meridians.
m.drawparallels(np.arange(-80.,81.,20.))
m.drawmeridians(np.arange(-180.,181.,20.))
m.drawmapboundary(fill_color='aqua')
# draw tissot's indicatrix to show distortion.
ax = plt.gca()
for y in np.linspace(m.ymax/20,19*m.ymax/20,10):
for x in np.linspace(m.xmax/20,19*m.xmax/20,12):
lon, lat = m(x,y,inverse=True)
poly = m.tissot(lon,lat,1.25,100,\
facecolor='green',zorder=10,alpha=0.5)
plt.title("Albers Equal Area Projection")
plt.show()
# setup north polar aimuthal equidistant basemap.
# The longitude lon_0 is at 6-o'clock, and the
# latitude circle boundinglat is tangent to the edge
# of the map at lon_0.
m = Basemap(projection='npaeqd',boundinglat=10,lon_0=270,resolution='l')
m.drawcoastlines()
m.fillcontinents(color='coral',lake_color='aqua')
# draw parallels and meridians.
m.drawparallels(np.arange(-80.,81.,20.))
m.drawmeridians(np.arange(-180.,181.,20.))
m.drawmapboundary(fill_color='aqua')
# draw tissot's indicatrix to show distortion.
ax = plt.gca()
for y in np.linspace(m.ymax/20,19*m.ymax/20,10):
for x in np.linspace(m.xmax/20,19*m.xmax/20,10):
lon, lat = m(x,y,inverse=True)
poly = m.tissot(lon,lat,2.5,100,\
facecolor='green',zorder=10,alpha=0.5)
plt.title("North Polar Azimuthal Equidistant Projection")
plt.show()
| docker/notebooks/basemapProjections.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (pommer)
# language: python
# name: pommer
# ---
import pommerman
from pommerman import agents
import numpy as np
import time
# Create a set of agents (exactly four)
agent_list = [
agents.SimpleAgent(),
agents.SimpleAgent(),
agents.SimpleAgent(),
agents.SimpleAgent(),
# agents.DockerAgent("pommerman/simple-agent", port=12345),
]
# Make the "Free-For-All" environment using the agent list
env = pommerman.make('PommeFFACompetition-v0', agent_list)
rewards = []
lengths = []
start_time = time.time()
# Run the episodes just like OpenAI Gym
for i_episode in range(1000):
state = env.reset()
done = False
lens = [None] * 4
t = 0
while not done:
#env.render()
actions = env.act(state)
state, reward, done, info = env.step(actions)
t += 1
for j in range(4):
if lens[j] is None and reward[j] != 0:
lens[j] = t
rewards.append(reward)
lengths.append(lens)
print('Episode {} finished'.format(i_episode))
elapsed = time.time() - start_time
np.mean(rewards, axis=0), np.std(rewards, axis=0)
np.mean(lengths, axis=0), np.std(lengths, axis=0)
np.mean(rewards), np.std(rewards)
np.mean(lengths), np.std(lengths)
total_timesteps = np.sum(np.max(lengths, axis=1))
elapsed, total_timesteps, elapsed / total_timesteps
np.savez_compressed("eval_simple_1000.npz", rewards=rewards, lengths=lengths, elapsed=elapsed, total_timesteps=total_timesteps)
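# The per-agent length bookkeeping in the episode loop above (record the first step at which each agent receives a nonzero reward, i.e. its elimination step or the final step) can be checked in isolation:

```python
# lens[j] = first step at which agent j saw a nonzero reward.
rewards_by_step = [[0, 0, 0, 0],
                   [0, -1, 0, 0],
                   [1, -1, -1, -1]]
lens = [None] * 4
for t, reward in enumerate(rewards_by_step, start=1):
    for j in range(4):
        if lens[j] is None and reward[j] != 0:
            lens[j] = t
print(lens)  # [3, 2, 3, 3]
```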
| rl_agent/tambet/Eval SimpleAgent.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
# # Sparse Probit Demo
#
# In this demo, we illustrate how to use the `vampyre` package for a simple generalized linear model (GLM). We build on the [sparse linear inverse demo](sparse_lin_inverse.ipynb) and consider the problem of recovering a sparse vector $z_0$ from binary measurements of the form:
# $$
# y = \mbox{sign}(z_1), \quad z_1 = Az_0 + w,
# $$
# where $w$ is Gaussian noise. This is similar to the linear inverse problem considered in the [sparse linear inverse demo](sparse_lin_inverse.ipynb), except that we have added a nonlinear output. In statistics, this model is known as a Generalized Linear Model or GLM. The specific form here arises in *sparse probit classification*, where $y$ is a vector of binary class labels, $A$ is a data matrix and $z_0$ is a set of coefficients in the model. The model also arises in de-quantization problems in signal processing. The particular parameters in this demo were taken from:
#
# * Schniter, Philip, <NAME>, and <NAME>. "Vector Approximate Message Passing for the Generalized Linear Model." arXiv preprint arXiv:1612.01186 (2016).
#
# In this demo, you will learn to
# * Generate synthetic data for a GLM
# * Set up a GLM as a type of multi-layer network
# * Use the multi-layer VAMP method to perform the GLM estimation
# * Handle multi-column data.
# ## Importing the Package
#
#
# We first import the `vampyre` and other packages as in the [sparse linear inverse demo](sparse_lin_inverse.ipynb).
# +
# Add the vampyre path to the system path
import os
import sys
vp_path = os.path.abspath('../../')
if not vp_path in sys.path:
sys.path.append(vp_path)
import vampyre as vp
# Load the other packages
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
# %matplotlib inline
# -
# ## Generating Synthetic Data
#
# We next generate synthetic data. In this case, we will use multi-column data. First, we set the dimensions.
# +
# Parameters
nz0 = 512 # number of components per column of z0
nz1 = 4096 # number of components per column of z1
ncol = 10 # number of columns
# Compute the shapes
zshape0 = (nz0,ncol) # Shape of z0 matrix
zshape1 = (nz1,ncol) # Shape of z1 matrix
Ashape = (nz1,nz0) # Shape of A matrix
# -
# Next, similar to the [sparse linear inverse demo](sparse_lin_inverse.ipynb), we create Bernoulli-Gaussian data.
# +
# Parameters
sparse_rat = 0.1 # sparsity ratio
zmean_act = 0 # mean for the active components
zvar_act = 1 # variance for the active components
snr = 30 # SNR in dB
# Generate the random input
z0 = np.random.normal(zmean_act, np.sqrt(zvar_act), zshape0)
u = np.random.uniform(0, 1, zshape0) < sparse_rat
z0 = z0*u
# -
# Now, we create a random transform $A$ and output $z_1 = Az_0 + w$.
# +
# Random transform
b = np.zeros(zshape1)
A = np.random.normal(0, 1/np.sqrt(nz0), Ashape)
Az0 = A.dot(z0)
# Add noise
wvar = np.mean(np.abs(Az0)**2)*np.power(10, -0.1*snr)
z1 = Az0 + np.random.normal(0,np.sqrt(wvar), zshape1)
# Quantize
thresh = 0
y = (z1 > thresh)
# -
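# The SNR-to-noise-variance conversion used above can be verified standalone: setting `wvar = signal_power * 10**(-SNR_dB/10)` guarantees that the realized SNR in dB comes back out as specified (illustrative numbers below):

```python
import math

# wvar = signal_power * 10**(-SNR_dB/10)  =>  10*log10(signal_power/wvar) == SNR_dB
signal_power = 4.0
snr_db = 30
wvar = signal_power * 10 ** (-0.1 * snr_db)
print(10 * math.log10(signal_power / wvar))  # ≈ 30.0
```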
# ## Representing the GLM as a Multi-Layer Network
#
# In the `vampyre` package, we represent the GLM as a special case of *multi-layer* network. Specifically, the joint density of the output $y$ and intermediate variables $z_0$ and $z_1$ has a factorizable structure as:
# $$
# p(z_0,z_1,y) = p(z_0)p(z_1|z_0)p(y|z_1).
# $$
# We represent each of the three terms in the factorization with an estimator:
# * $p(z_0)$: This is the BG prior on $z_0$ and will be represented by the estimator `est_in`.
# * $p(z_1|z_0)$: This is the conditional density representing the linear mapping $z_1 = Az_0 + w$ and will be represented by the estimator `est_trans`.
# * $p(y|z_1)$: This is the output from the hard-thresholding, $y= \mbox{sign}(z_1)$ and is represented by the estimator `est_out`.
# +
# Create estimator for the input prior
map_est = False
est0_gauss = vp.estim.GaussEst(zmean_act,zvar_act,zshape0,map_est=map_est)
est0_dis = vp.estim.DiscreteEst(0,1,zshape0)
est_in = vp.estim.MixEst([est0_gauss,est0_dis],[sparse_rat,1-sparse_rat])
# Estimator for the output
est_out = vp.estim.HardThreshEst(y,zshape1,thresh=thresh)
# Estimtor for the linear transform
Aop = vp.trans.MatrixLT(A,zshape0)
est_lin = vp.estim.LinEstimTwo(Aop,b,wvar)
# Put all estimators in a list
est_list = [est_in,est_lin,est_out]
# -
# We next create a list of message handlers, one for each of the two unknown variables, `z0` and `z1`. Again, we will just use the simple message handlers.
# +
# Create the message handlers
damp=1
msg_hdl0 = vp.estim.MsgHdlSimp(map_est=map_est, shape=zshape0,damp=damp)
msg_hdl1 = vp.estim.MsgHdlSimp(map_est=map_est, shape=zshape1,damp=damp)
# Put the handlers in a list
msg_hdl_list = [msg_hdl0,msg_hdl1]
# -
# ## Running the Multi-Layer VAMP Solver
#
# Having described the input and output estimators and the variance handler, we can now construct a ML-VAMP solver. The constructor takes the list of estimators, `est_list`, and the list of message handlers, `msg_hdl_list`. Also, similar to VAMP, the solver takes a history list `hist_list` and the number of iterations `nit`.
nit = 10 # number of iterations
solver = vp.solver.MLVamp(est_list,msg_hdl_list,comp_cost=True,\
    hist_list=['zhat','zhatvar'],nit=nit)
# We now run the solver by calling the `solve()` method. For a small problem like this, this should be close to instantaneous.
solver.solve()
# The VAMP solver estimate is the field `zhat`. Since there are two variables, `zhat` is a list. We extract the estimate `zhat0=zhat[0]` for `z0`. Then, we plot the first column (`icol=0`) of the true data and estimate.
zhat = solver.zhat
zhat0 = zhat[0]
icol = 0 # column to plot
t = np.array(range(nz0))
plt.plot(t,z0[:,icol])
plt.plot(t,zhat0[:,icol])
plt.axis([0,nz0,-3,3])
# Since the probit measurement model is invariant to the scaling of vector `z0`, we measure errors via a debiased normalized MSE computed with the following function.
def debias_mse(zhat,ztrue):
"""
If zhat and ztrue are 1D vectors, the function computes the *debiased normalized MSE* defined as:
    dmse_lin = min_c ||ztrue - c*zhat||^2 / ||ztrue||^2 = 1 - |zhat'*ztrue|^2 / (||ztrue||^2 ||zhat||^2)
The function returns the value in dB: dmse = 10*log10(dmse_lin)
If zhat and ztrue are matrices, dmse_lin is computed for each column and then averaged over the columns
"""
zcorr = np.abs(np.sum(zhat.conj()*ztrue,axis=0))**2
zhatpow = np.sum(np.abs(zhat)**2,axis=0)
zpow = np.sum(np.abs(ztrue)**2,axis=0)
tol = 1e-8
if np.any(zhatpow < tol) or np.any(zpow < tol):
dmse = 0
else:
dmse = 10*np.log10(np.mean(1 - zcorr/zhatpow/zpow))
return dmse
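# A property worth checking: the debiased MSE is invariant to rescaling the estimate, which is exactly why it suits the scale-ambiguous probit model. A 1-D pure-Python check on toy vectors (not solver output):

```python
import math

def debias_mse_1d(zhat, ztrue):
    # dmse_lin = 1 - |<zhat, ztrue>|^2 / (||zhat||^2 ||ztrue||^2), returned in dB.
    dot = sum(a * b for a, b in zip(zhat, ztrue))
    zhatpow = sum(a * a for a in zhat)
    zpow = sum(b * b for b in ztrue)
    return 10 * math.log10(1 - dot ** 2 / (zhatpow * zpow))

ztrue = [1.0, 2.0, -1.0]
zhat = [0.9, 2.2, -0.7]
d1 = debias_mse_1d(zhat, ztrue)
d2 = debias_mse_1d([3 * v for v in zhat], ztrue)  # rescaled estimate
print(d1, d2)  # identical up to float rounding
```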
# We can then measure the debiased normalized MSE of the estimate.
ztrue = [z0,z1]
zhat = solver.zhat
nvar = len(ztrue)
dmse = np.zeros(nvar)
for i in range(nvar):
zhati = zhat[i]
ztruei = ztrue[i]
dmse[i] = debias_mse(zhati,ztruei)
print("z{0:d} d-MSE: {1:7.2f}".format(i, dmse[i]))
# Finally, we can plot the actual and predicted debiased MSE as a function of the iteration number. When `solver` was constructed, we passed the argument `hist_list=['zhat', 'zhatvar']`, which tells the solver to store the value of the estimate `zhat` and the predicted error variance `zhatvar` at each iteration. We can recover these values from `solver.hist_dict`, the history dictionary. We then plot the predicted and actual MSE for each of the two variables. We see that the two match well. Also, the iterations are indexed as "half-iterations" since each iteration takes two passes.
#
# Note that in the multi-column data, the variance at each iteration, `zhatvari`, is a vector with one predicted variance per column. The MLVAMP method can be adjusted to average the variance over all the columns, but this generally leads to poorer or even unstable performance.
# +
# Compute the MSE as a function of the iteration
zhat_hist = solver.hist_dict['zhat']
zvar_hist = solver.hist_dict['zhatvar']
nit = len(zhat_hist)
mse_act = np.zeros((nit,nvar))
mse_pred = np.zeros((nit,nvar))
for ivar in range(nvar):
zpowi = np.mean(np.abs(ztrue[ivar])**2, axis=0)
for it in range(nit):
zhati = zhat_hist[it][ivar]
zhatvari = zvar_hist[it][ivar]
mse_act[it,ivar] = debias_mse(zhati,ztrue[ivar])
mse_pred[it,ivar] = 10*np.log10(np.mean(zhatvari/zpowi))
for ivar in range(nvar):
plt.subplot(1,nvar,ivar+1)
plt.plot(range(nit), mse_act[:,ivar], 'o-', linewidth=2)
plt.plot(range(nit), mse_pred[:,ivar], 's', linewidth=1)
plt.xlabel('Half iteration')
if (ivar == 0):
plt.ylabel('Normalized MSE (dB)')
plt.legend(['Actual', 'Predicted'])
plt.title("z{0:d}".format(ivar))
plt.grid()
# -
| demos/sparse/sparse_probit.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="sQq9QvUgFbKB" colab_type="code" colab={}
# !pwd
# + id="t0_EdI479Ykf" colab_type="code" colab={}
from google.colab import drive
drive.mount('/content/drive')
# + id="FVnPxOQo5RdR" colab_type="code" colab={}
import os
os.chdir('/content/drive/Shared drives/brentfromchina/code_warehouse/GPT2-Chinese')
# + id="iilEe93nXYw3" colab_type="code" colab={}
# !ls
# + id="clztyccP9j-d" colab_type="code" colab={}
# !cat train.py
# + id="pMIWEgjJYSw9" colab_type="code" colab={}
# !pip install -r requirements.txt
# + id="zAgXr8h7X7L4" colab_type="code" colab={}
# !python train.py --raw
# + id="kZwC6b4B-KHZ" colab_type="code" colab={}
# !python generate.py --length=500 --nsamples=4 --prefix=袁隆平 --fast_pattern --save_samples --save_samples_path=demo/
| gpt-2_chinese.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.7 (tensorflow)
# language: python
# name: tensorflow
# ---
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
# +
def get_data(file_path):
data = np.loadtxt(file_path, delimiter=',', skiprows=1)
x = np.array([np.append(1, row) for row in data[:, :-1]])
y = np.array([1 if label >= 7 else 0 for label in data[:, -1]])
return x, y
def standardize(data):
mean = np.mean(data[:, 1:], axis=0)
std_dev = np.std(data[:, 1:], axis=0)
z = np.array([(row - mean) / std_dev for row in data[:, 1:]])
return np.column_stack((data[:, 0], z))
def sigmoid(z):
return 1 / (1 + np.exp(-z))
def threshold(a):
if a >= 0.5:
return 1
else:
return 0
def cost_function(data, labels, weights):
m = len(labels)
h_x = sigmoid(np.dot(data, weights))
cost = np.dot(-labels, np.log(h_x)) - np.dot((1 - labels), np.log(1 - h_x))
return cost / m
def gradient(data, labels, weights):
m = len(labels)
h_x = sigmoid(np.dot(data, weights))
grads = np.zeros(shape=weights.shape)
for i, grad in enumerate(grads):
grads[i] = np.sum((h_x - labels).dot(data[:, i])) / m
return grads
def BGD(data, labels, learning_rate, epochs):
J = []
thetas = np.zeros(shape=data[0].shape)
for _ in range(epochs):
thetas[:] -= learning_rate * gradient(data=data, labels=labels, weights=thetas)
J.append(cost_function(data=data, labels=labels, weights=thetas))
return J, thetas
def plot_error(error):
sns.set_style(style='darkgrid')
plt.plot(error)
plt.xlabel("Iterations")
plt.ylabel("Error")
plt.title("Cost Function")
plt.show()
def predict(x, y, weights):
count = 0
preds = [threshold(sigmoid(np.dot(row, weights))) for row in x]
for i in range(len(preds)):
if preds[i] == y[i]:
count += 1
return preds, count
def RMSE(predictions, actual):
rmse = np.sum(np.square(predictions - actual)) / len(actual)
return np.sqrt(rmse)
# -
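# The coordinate-wise loop in `gradient` is equivalent to a single matrix
# product, X.T @ (h - y) / m. A minimal standalone sketch (hypothetical toy
# data, not the wine dataset) confirming the two forms agree:

```python
import numpy as np

np.random.seed(0)
X = np.column_stack((np.ones(8), np.random.randn(8, 3)))  # bias column + 3 features
y = np.random.randint(0, 2, size=8).astype(float)
w = np.random.randn(4)

h = 1 / (1 + np.exp(-X.dot(w)))  # sigmoid(X @ w)
m = len(y)

# Per-coordinate gradient, as in the loop above
loop_grad = np.array([np.sum((h - y) * X[:, i]) for i in range(X.shape[1])]) / m
# Same quantity as one matrix product
vec_grad = X.T.dot(h - y) / m

assert np.allclose(loop_grad, vec_grad)
```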
def main():
X, y = get_data(file_path='./winequality-red.csv')
std_X = standardize(data=X)
x_train, x_test, y_train, y_test = train_test_split(std_X, y, test_size=0.2, shuffle=True, random_state=42)
J, thetas = BGD(data=x_train, labels=y_train, learning_rate=0.01, epochs=5000)
plot_error(error=J)
predictions, correct_nums = predict(x=x_test, y=y_test, weights=thetas)
print("Accuracy: {}".format((correct_nums / len(y_test)) * 100))
print("RMSE: {}".format(RMSE(predictions=predictions, actual=y_test)))
tn, fp, fn, tp = confusion_matrix(y_true=y_test, y_pred=predictions, labels=[0, 1]).ravel()
print(f"\nTrue Negatives: {tn}\nTrue Positives: {tp}\nFalse Negative: {fn}\nFalse Positive: {fp}")
    sns.heatmap(confusion_matrix(y_true=y_test, y_pred=predictions), robust=True, annot=True)
plt.show()
if __name__ == '__main__':
main()
| red_wine_classifier.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + tags=[]
from ppsim import Simulation, StatePlotter, time_trials
from dataclasses import dataclass
import dataclasses
import numpy as np
from matplotlib import pyplot as plt
import seaborn as sns
import pickle
# %matplotlib widget
import ipywidgets as widgets
# -
# # Simplest protocols for the majority problem
#
# The majority problem has a simple 4 state solution, which was analyzed [here](https://arxiv.org/abs/1202.1083) and [here](https://arxiv.org/abs/1404.7671). The rule is always correct because every transition preserves the invariant #A - #B.
# + tags=[]
exact_majority = {
('A', 'B'): ('a', 'b'),
('A', 'b'): ('A', 'a'),
('B', 'a'): ('B', 'b')
}
# -
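# As a standalone sanity check (no ppsim needed), we can verify the invariant
# directly: each of the three transitions leaves the active-opinion difference
# #A - #B unchanged, where upper case denotes active agents.

```python
from collections import Counter

# The 4-state rule, restated here so the check is self-contained
exact_majority = {
    ('A', 'B'): ('a', 'b'),
    ('A', 'b'): ('A', 'a'),
    ('B', 'a'): ('B', 'b'),
}

def active_gap(states):
    # Difference #A - #B counted over active agents only
    c = Counter(states)
    return c['A'] - c['B']

for before, after in exact_majority.items():
    assert active_gap(before) == active_gap(after)
```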
# In the worst case, where the initial gap (#A - #B) is constant, this takes $\Theta(n \log n)$ time to reach the stable correct output configuration.
# + tags=[]
n = 10 ** 5
init_config = {'A': n // 2 + 1, 'B': n // 2}
sim = Simulation(init_config, exact_majority, transition_order='symmetric')
sim.run()
sim.history.plot()
plt.title('4 state majority protocol')
plt.xscale('symlog')
plt.yscale('symlog')
plt.xlim(0, sim.times[-1])
plt.ylim(0, n)
# -
# In the case of a tie, the 4 state protocol does not have well-defined behavior. But by adding two more states, we can correctly detect ties as well.
# + tags=[]
# states are A, B, T, a, b, t
def exact_majority_ties(x, y):
# Cancellation
if x == 'A' and y == 'B':
return ('T', 'T')
# Active A / B eliminate T
if x in ['A', 'B'] and y == 'T':
return (x, x.lower())
# Active converts passive
if x.isupper() and y.islower():
return (x, x.lower())
n = 10 ** 5
sim = Simulation({'A': n // 2, 'B': n // 2}, exact_majority_ties, transition_order='symmetric')
print(sim.reactions)
sim.run()
sim.history.plot()
plt.title('6 state majority protocol detecting ties')
plt.xscale('symlog')
plt.yscale('symlog')
plt.xlim(0, sim.times[-1])
plt.ylim(0, n)
# -
# Another simple example is the 3-state approximate majority protocol, which was analyzed [here](http://www.cs.yale.edu/homes/aspnes/papers/approximate-majority-journal.pdf) and [here](https://www.cs.ubc.ca/~condon/papers/approx-maj-journal.pdf).
# + tags=[]
a, b, u = 'A', 'B', 'U'
approximate_majority = {
(a,b): (u,u),
(a,u): (a,a),
(b,u): (b,b)
}
n = 10 ** 9
init_config = {a: int(n // 2 * 0.5001), b: int(n // 2 * 0.4999)}
sim = Simulation(init_config, approximate_majority)
sim.run(recording_step=0.1)
sim.history.plot()
plt.title('3 state approximate majority protocol')
# -
# It was shown to stabilize in only $O(\log n)$ time to a consensus configuration.
# + tags=[]
ns = [int(n) for n in np.geomspace(10, 10 ** 8, 20)]
def initial_condition(n):
return {'A': n // 2, 'B': n // 2}
df = time_trials(approximate_majority, ns, initial_condition, num_trials=100, max_wallclock_time = 30, transition_order='symmetric')
fig, ax = plt.subplots()
ax = sns.lineplot(x='n', y='time', data=df)
ax.set_title('Average stabilization time of approximate majority')
ax.set_xscale('log')
# -
# This consensus will only be correct with high probability, however, and requires the initial gap to be $\Omega(\sqrt{n \log n})$. When the gap is close to 0, the difference #A - #B performs essentially a random walk, which is why a sufficiently large initial gap is necessary to ensure the initial majority stays ahead.
# + tags=[]
sim.reset({a: n // 2 + 1, b: n // 2 - 1})
sim.run(4, recording_step = 0.01)
fig, ax = plt.subplots()
ax.set_title('Count of A - count of B')
ax.set_yscale('symlog')
(sim.history['A'] - sim.history['B']).plot()
# -
# # Bias Averaging Framework for $O(\log n)$ state protocols
#
# We view the initial states `A` and `B` as having `bias = +1` and `bias = -1` respectively. We then maintain the invariant that all interactions preserve the total bias.
# To bound the total number of states to $O(\log n)$, the only allowable values for `bias` will be $\pm 1, \pm\frac{1}{2}, \pm\frac{1}{4}, \ldots, \pm\frac{1}{2^L}$ where $L = \lceil \log_2(n) \rceil$.
# We describe the state of the agent with two fields `opinion`$=\pm 1$ and `exponent`$=0,-1, \ldots, -L$, so `bias = opinion * (2 ** exponent)`.
# + tags=[]
from fractions import Fraction
@dataclass(unsafe_hash=True)
class Agent:
opinion: int = 0
exponent: int = 0
@property
def bias(self):
return self.opinion * 2 ** self.exponent
@bias.setter
def bias(self, value):
if value == 0:
self.opinion = self.exponent = 0
else:
self.opinion = int(np.sign(value))
exponent = np.log2(abs(value))
if exponent.is_integer():
self.exponent = int(exponent)
else:
                raise ValueError(f'bias = {value} must be an integer power of 2')
def __str__(self):
if self.bias == 0:
return '0'
s = ''
if self.bias > 0:
s += '+'
if abs(self.bias) > 1/100:
s += str(Fraction(self.bias))
else:
if self.bias < 0:
s += '-'
s += '1/2^' + str(abs(self.exponent))
return s
def init_agents(a, b):
return {Agent(opinion = 1): a, Agent(opinion = -1): b}
# -
# The cancel / split reactions maintain the invariant sum of agent biases.
# + tags=[]
def cancel_split(a: Agent, b: Agent, L: int):
# cancel reaction
if a.bias == -b.bias:
a.opinion = b.opinion = 0
a.exponent = b.exponent = 0
# split reaction
if a.bias == 0 and abs(b.bias) > 2 ** (-L):
a.opinion = b.opinion
a.exponent = b.exponent = b.exponent - 1
if b.bias == 0 and abs(a.bias) > 2 ** (-L):
b.opinion = a.opinion
b.exponent = a.exponent = a.exponent - 1
print(Simulation(init_agents(1, 1), cancel_split, L = 4).reactions)
# -
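# A quick arithmetic check (standalone, no ppsim needed) that both reactions
# above preserve the summed bias of the interacting pair:

```python
# cancel: (+2**-k) + (-2**-k)  ->  0 + 0
# split:  0 + (2**-k)          ->  2**-(k+1) + 2**-(k+1)
# Powers of two are exact in floating point, so equality holds exactly.
for k in range(10):
    assert (2 ** -k) + (-(2 ** -k)) == 0 + 0              # cancel reaction
    assert 0 + 2 ** -k == 2 ** -(k + 1) + 2 ** -(k + 1)   # split reaction
```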
# By themselves, however, these rules do not solve majority.
# + tags=[]
n = 10 ** 6
sim = Simulation(init_agents(n // 2 + 1, n // 2), cancel_split, L=int(np.log2(n)))
sp = StatePlotter()
sim.add_snapshot(sp)
sp.ax.set_yscale('symlog')
# + tags=[]
sim.run(recording_step=0.1)
sim.snapshot_slider()
# -
# There are a few additional transitions that will also preserve the bias.
# + tags=[]
from itertools import product
def bias_average(a, b, L):
a, b = dataclasses.replace(a), dataclasses.replace(b)
# all allowable bias values
biases = [0] + [2 ** i for i in range(-L,1)] + [-2 ** i for i in range(-L, 1)]
# all pairs of bias values that preserve the sum
legal_outputs = [(x,y) for (x,y) in product(biases, biases) if x + y == a.bias + b.bias]
# choose the pair of bias values which are closest together
a.bias, b.bias = legal_outputs[np.argmin(np.array([abs(x-y) for (x,y) in legal_outputs]))]
return a, b
print(Simulation(init_agents(1, 1), bias_average, L = 4).reactions)
# -
# But just these transitions do not speed up the protocol or remove the probability of error.
# + tags=[]
n = 10 ** 6
sim = Simulation(init_agents(n // 2 + 1, n // 2), bias_average, L=int(np.log2(n)))
sp = StatePlotter()
sim.add_snapshot(sp)
sp.ax.set_yscale('symlog')
# + tags=[]
sim.run(recording_step=0.1)
sim.snapshot_slider()
# -
# Here is an example simulation run in which some minority agents were never eliminated:
# + tags=[]
sim = pickle.load( open( "majority_simulations/bias_average.p", "rb" ) )
sim.snapshot_slider()
# -
# # Adding Synchronization
#
# The unbiased agents will now have a field `hour`, and will wait until `hour = i` before doing a split down to `exponent = -i`.
# They will synchronize their `hour` with separate clock agents who are keeping a timer through a field `minute`, where `hour = minute // m` for a parameter `m` which gives the number of minutes per hour.
# + tags=[]
@dataclass(unsafe_hash=True)
class MajorityAgent(Agent):
role: str = 'main'
_hour: int = 0
minute: int = 0
finished: bool = False
m: int = 5
@property
def hour(self):
if self.role == 'clock':
return self.minute // self.m
else:
return self._hour
@hour.setter
def hour(self, value):
if self.role == 'main':
self._hour = value
# can't change hour for a clock agent
def __str__(self):
if self.bias != 0:
return super().__str__()
if self.role == 'clock':
return 'c' + str(self.minute)
else:
return 'u' + str(self.hour)
def init_majority_agents(a, b, m):
return {MajorityAgent(opinion = 1, m = m): a, MajorityAgent(opinion = -1, m = m): b}
# custom function to build plots that visualize the 3 populations of clock, unbiased, and biased agents
def make_plots(sim):
plt.ioff()
clock_plot = StatePlotter(lambda a: a.minute if a.role == 'clock' else None, update_time = 1)
sim.add_snapshot(clock_plot)
clock_plot.ax.set_xlabel('clock minute')
clock_plot.ax.axes.xaxis.set_ticklabels([])
unbiased_plot = StatePlotter(lambda a: a.hour if a.role == 'main' and a.bias == 0 else None, update_time = 1)
sim.add_snapshot(unbiased_plot)
unbiased_plot.ax.set_xlabel('unbiased hour')
biased_plot = StatePlotter(lambda a: str(a) if a.bias != 0 else None, update_time = 1)
sim.add_snapshot(biased_plot)
for snap in sim.snapshots:
snap.ax.set_yscale('symlog')
snap.fig.tight_layout()
plt.ion()
sim.layout = widgets.GridspecLayout(6,2, height='700px', pane_heights=[4,7,1], grid_gap='5px')
sim.layout[0:2,0] = clock_plot.fig.canvas
sim.layout[0:2,1] = unbiased_plot.fig.canvas
sim.layout[2:5,:] = biased_plot.fig.canvas
sim.layout[5,:] = sim.snapshot_slider()
display(sim.layout)
# -
# The clock agents will count for an additional `L` minutes after the last hour ($O(\log n)$ time). Then they will send a signal `finished = True` that makes all agents stop (and move on to a later phase of the algorithm).
# + tags=[]
def majority(a, b, L):
a.finished = b.finished = a.finished or b.finished
if a.finished:
a.minute = b.minute = 0
a.hour = b.hour = 0
else:
if a.role == b.role == 'main':
# cancel reaction
if a.bias == -b.bias != 0:
a.opinion = b.opinion = 0
a.hour = b.hour = abs(a.exponent)
a.exponent = b.exponent = 0
# half the agents from first split become clock
if a.hour == 0:
a.role = 'clock'
# split reaction
if a.bias == 0 and b.bias != 0 and a.hour > abs(b.exponent):
a.opinion = b.opinion
a.exponent = b.exponent = b.exponent - 1
a.hour = b.hour = 0
if b.bias == 0 and a.bias != 0 and b.hour > abs(a.exponent) :
b.opinion = a.opinion
b.exponent = a.exponent = a.exponent - 1
a.hour = b.hour = 0
# unbiased agents propagate max hour
if a.bias == b.bias == 0:
a.hour = b.hour = min(max(a.hour, b.hour), L)
# clock minute uses new fixed resolution phase clock
if a.role == b.role == 'clock':
# drip reaction
if a.minute == b.minute:
a.minute += 1
# Wait an additional L minutes after hour L before finishing
if a.minute == a.m * L + L:
a.finished = True
# epidemic reaction
else:
a.minute = b.minute = max(a.minute, b.minute)
# + [markdown] tags=[]
# If we set the number of minutes per hour `m` to be $O(\log n)$ then with high probability the entire population will stay synchronized at the same hour. In this case, we have an $O(\log^2 n)$ time majority algorithm, essentially the same as the standard 'canceling and doubling' protocols.
# + tags=[]
n = 10 ** 6
sim = Simulation(init_majority_agents(n // 2 + 1, n // 2, m = int(np.log(n))), majority, L=int(np.log2(n)))
make_plots(sim)
# -
sim.run()
sim.layout[5,:] = sim.snapshot_slider()
# To make the protocol take only $O(\log n)$ time, we set the parameter `m` to be constant. In the case of a tie, we will end up with every biased agent reaching the minimum value `exponent = -L`. Choosing $L = \lceil \log_2(n) \rceil$ ensures that this can only happen in the case of a tie. Thus we can check if all exponents are `-L` after this phase finishes to stably detect a tie.
n = 10 ** 7
sim = Simulation(init_majority_agents(n // 2, n // 2, m = 3), majority, L=int(np.log2(n)))
make_plots(sim)
sim.run()
sim.layout[5,:] = sim.snapshot_slider()
# In the more general case, we will not eliminate all minority agents. What will be true, with high probability, is that a vast majority of agents will finish with the majority opinion, in a range of 3 consecutive exponents.
n = 10 ** 7
sim = Simulation(init_majority_agents(n // 2 + int(n ** 0.5), n // 2 - int(n ** 0.5), m = 3), majority, L=int(np.log2(n)))
sim.run()
make_plots(sim)
sim.run()
sim.layout[5,:] = sim.snapshot_slider()
# +
## For a larger value of n, a simulation was run and then pickled
# n = 10 ** 10
# sim = Simulation(init_majority_agents(n // 2 + int(n ** 0.5), n // 2 - int(n ** 0.5), m = 3), majority, L=int(np.log2(n)))
# sim.run()
# pickle.dump(sim, open( "majority_simulations/majority.p", "wb" ) )
# We can now load this simulation
sim = pickle.load( open( "majority_simulations/majority.p", "rb" ) )
make_plots(sim)
# -
# # Clock Protocol
# Looking more closely at the rule of the `clock` agents, we can see the key feature of the `minute` distribution: the front tail decays doubly-exponentially, while the back tail decays exponentially. This ensures that when a majority of agents are in `hour = h`, the fraction of agents with `hour > h` can be made arbitrarily small by tuning the parameter `m`.
# + tags=[]
def clock(a, b, m):
if a == b < m:
return a + 1, b
else:
return max(a, b), max(a, b)
# + tags=[]
n = 10 ** 9
sim = Simulation({0: n}, clock, m = 30)
sp = StatePlotter()
sim.add_snapshot(sp)
sp.ax.set_yscale('symlog')
# + tags=[]
sim.run(recording_step=0.1)
sim.snapshot_slider()
# -
# Notice also that this clock rule is extremely similar to the power-of-two-choices phase clock. In fact, the distribution of the clock ends up being essentially the same.
# + tags=[]
def two_choices_clock(a, b, m):
if min(a, b) < m:
return min(a, b) + 1, max(a, b)
# + tags=[]
n = 10 ** 9
sim = Simulation({0: n}, two_choices_clock, m = 30)
sp = StatePlotter()
sim.add_snapshot(sp)
sp.ax.set_yscale('symlog')
# + tags=[]
sim.run(recording_step=0.1)
sim.snapshot_slider()
# + tags=[]
| examples/majority.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#Call my other functions
import DonorsChooseFunx
import pandas as pd
import datetime
from datetime import date
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
# +
today = date.today()
day_of_year = today.timetuple().tm_yday #get day of year
list_o_days=[]
list_o_day_coords=[]
j=0
# Collect the next 91 days, wrapping around at day 364, together with their plot coordinates
for d in range(day_of_year, day_of_year + 91):
    day = d % 365
    list_o_days.append(day)
    list_o_day_coords.append(DonorsChooseFunx.getxy(day))
# -
type(day_of_year)
d = datetime.datetime.strptime('{} {}'.format(day_of_year, today.year),'%j %Y')
d
dateDF = pd.DataFrame(list_o_day_coords,columns=['circlx','circly'])
dateDF['dayOFyear']=list_o_days
dateDF.astype({'dayOFyear':'int'}).dtypes
datetime.datetime.strptime('{} {}'.format(dateDF.loc[0,'dayOFyear'], today.year),'%j %Y')
dateDF['calendardate']=dateDF.apply(lambda row: (datetime.datetime.strptime('{} {}'.format(int(row['dayOFyear']), today.year),'%j %Y')),axis=1)
dateDF.iloc[0,1]
from colour import Color
red = Color("red")
colors = list(red.range_to(Color("green"),365))
m=1
x,y=DonorsChooseFunx.getxy(m)
str(colors[j])
plt.rcParams["figure.figsize"] = (12,12)
j=0
for m in range(0,364):
x,y=DonorsChooseFunx.getxy(m)
plt.scatter(x,y, color=str(colors[j]),alpha=0.5);
j+=1
daytuple=DonorsChooseFunx.getxy(day_of_year)
daytuple
cx, cy = daytuple
cx
state_you_live = "AK"
scst=['school_state_AK',
'school_state_AL',
'school_state_AR',
'school_state_AZ',
'school_state_CA',
'school_state_CO',
'school_state_CT',
'school_state_DC',
'school_state_DE',
'school_state_FL',
'school_state_GA',
'school_state_HI',
'school_state_IA',
'school_state_ID',
'school_state_IL',
'school_state_IN',
'school_state_KS',
'school_state_KY',
'school_state_LA',
'school_state_MA',
'school_state_MD',
'school_state_ME',
'school_state_MI',
'school_state_MN',
'school_state_MO',
'school_state_MS',
'school_state_MT',
'school_state_NC',
'school_state_ND',
'school_state_NE',
'school_state_NH',
'school_state_NJ',
'school_state_NM',
'school_state_NV',
'school_state_NY',
'school_state_OH',
'school_state_OK',
'school_state_OR',
'school_state_PA',
'school_state_RI',
'school_state_SC',
'school_state_SD',
'school_state_TN',
'school_state_TX',
'school_state_UT',
'school_state_VA',
'school_state_VT',
'school_state_WA',
'school_state_WI',
'school_state_WV',
'school_state_WY']
# One-hot encode the state: define one indicator variable per state column,
# 1 for the state you live in and 0 otherwise
import numpy as np
for st in scst:
    globals()[st] = 1 if state_you_live in st else 0
# +
school_state = dict()
for s in scst:
if state_you_live in s:
school_state[s] = 1
else:
school_state[s] = 0
# +
school_state
# -
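# A more idiomatic way to build the same one-hot encoding is
# `pandas.get_dummies` over a categorical column. A sketch (with a truncated
# category list for illustration; the notebook uses all 51 states):

```python
import pandas as pd

states = ["AK", "AL", "AR"]  # truncated for illustration
state_you_live = "AK"

# Declaring the categories up front guarantees a column for every state,
# even those not present in the data
df = pd.DataFrame({"school_state": pd.Categorical([state_you_live], categories=states)})
one_hot = pd.get_dummies(df, columns=["school_state"], dtype=int)
# Columns: school_state_AK, school_state_AL, school_state_AR with values 1, 0, 0
```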
valuearray=np.array([[students_reached,school_state,total_price_excluding_optional_support]])
valuearray
startingvals = [6,7,8]
endingvals = [9,10,11]
# +
for value in school_state:
startingvals.append(value)
startingvals.append(endingvals)
# -
| testscripts/.ipynb_checkpoints/Untitled11-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# default_exp benchmarking
# -
# # benchmarking
#
# > This module contains a new evaluation protocol for the UBC Phototour local patch dataset
#hide
from nbdev.showdoc import *
# +
#export
import numpy as np
import gc
import os
from fastprogress.fastprogress import progress_bar
from scipy.spatial.distance import cdist, hamming
from sklearn.metrics.pairwise import paired_distances
from sklearn.metrics import average_precision_score
def evaluate_mAP_snn_based(descriptors:np.array,
labels:np.array,
img_labels:np.array,
path_to_save_mAP: str,
backend:str ='numpy', distance:str ='euclidean'):
    '''Calculate mean average precision over per-image matching using the Lowe SNN ratio.'''
if os.path.isfile(path_to_save_mAP):
print (f"Found saved results {path_to_save_mAP}, loading")
res = np.load(path_to_save_mAP)
return res
backends = ['numpy', 'pytorch-cuda']
if backend not in backends:
        raise ValueError(f'backend {backend} should be one of {backends}')
possible_distances = ['euclidean', 'hamming']
if distance == 'euclidean':
p=2
elif distance == 'hamming':
p=0
else:
        raise ValueError(f'distance {distance} should be one of {possible_distances}')
APs = []
unique_img_labels = sorted(np.unique(img_labels))
for img_idx in progress_bar(unique_img_labels):
current_batch = img_labels == img_idx
cur_descs = descriptors[current_batch]
if backend == 'pytorch-cuda':
import torch
dev = torch.device('cpu')
try:
if torch.cuda.is_available():
dev = torch.device('cuda')
except:
dev = torch.device('cpu')
cur_descs = torch.from_numpy(cur_descs).to(dev).float()
cur_labels = labels[current_batch]
NN = cur_labels.shape[0]
pos_labels_repeat = np.broadcast_to(cur_labels.reshape(1,-1),(NN,NN))
pos_mask = (pos_labels_repeat == pos_labels_repeat.T)
        pos_mask_not_anchor = pos_mask != np.eye(NN, dtype=bool)
neg_idx = np.zeros((NN), dtype=np.int32)
if NN > 1000: # To avoid OOM, we will find hard negative in batches
bs1 = 128
nb = (NN // bs1)
for i in range(nb):
st = i*bs1
fin = min(NN, (i+1)*bs1)
if fin == st:
break
if backend == 'pytorch-cuda':
dm = torch.cdist(cur_descs[st:fin], cur_descs, p=p) +\
1000.0 * torch.from_numpy(pos_mask[st:fin]).to(device=dev, dtype=cur_descs.dtype) + \
1000.0 * torch.eye(NN, device=dev, dtype=torch.bool)[st:fin].float()
min_neg_idxs = torch.min(dm, axis=1)[1].cpu().numpy()
else:
dm = cdist(cur_descs[st:fin], cur_descs, metric=distance) +\
1000.0 * pos_mask[st:fin] + \
                         1000.0 * np.eye(NN, dtype=bool)[st:fin]
min_neg_idxs = np.argmin(dm, axis=1)
neg_idx[st:fin] = min_neg_idxs
# We want to create all possible anchor-positive combinations
pos_idxs = np.broadcast_to(np.arange(NN).reshape(1,-1),(NN,NN))[pos_mask_not_anchor]
anc_idxs = np.nonzero(pos_mask_not_anchor)[0]
pos_mask = None
neg_idxs = neg_idx[anc_idxs]
if backend == 'pytorch-cuda':
pos_dists = torch.nn.functional.pairwise_distance(cur_descs[anc_idxs], cur_descs[pos_idxs], p=p).detach().cpu().numpy()
            # use the same norm p as the positive distances (p=0 for hamming)
            neg_dists = torch.nn.functional.pairwise_distance(cur_descs[anc_idxs], cur_descs[neg_idxs], p=p).detach().cpu().numpy()
else:
if distance == 'hamming':
pos_dists = paired_distances(cur_descs[anc_idxs], cur_descs[pos_idxs], metric=hamming)
neg_dists = paired_distances(cur_descs[anc_idxs], cur_descs[neg_idxs], metric=hamming)
else:
pos_dists = paired_distances(cur_descs[anc_idxs], cur_descs[pos_idxs], metric=distance)
neg_dists = paired_distances(cur_descs[anc_idxs], cur_descs[neg_idxs], metric=distance)
correct = pos_dists <= neg_dists
snn = np.minimum(pos_dists,neg_dists) / np.maximum(pos_dists,neg_dists)
snn[np.isnan(snn)] = 1.0
ap = average_precision_score(correct, 1-snn)
APs.append(ap)
pos_mask = None
pos_mask_not_anchor = None
cur_descs = None
pos_labels_repeat = None
dm = None
gc.collect()
res = np.array(APs).mean()
if not os.path.isdir(os.path.dirname(path_to_save_mAP)):
os.makedirs(os.path.dirname(path_to_save_mAP))
np.save(path_to_save_mAP, res)
return res
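# A toy illustration of the SNN-ratio scoring used above (hypothetical
# distances, not real descriptors): a match counts as correct when the
# positive distance beats the hardest negative, and 1 - SNN is the confidence
# score fed to average precision.

```python
import numpy as np
from sklearn.metrics import average_precision_score

pos_dists = np.array([0.2, 0.9, 0.4])  # anchor-positive distances (toy values)
neg_dists = np.array([0.8, 0.5, 0.6])  # hardest-negative distances (toy values)

correct = pos_dists <= neg_dists  # [True, False, True]
snn = np.minimum(pos_dists, neg_dists) / np.maximum(pos_dists, neg_dists)
ap = average_precision_score(correct, 1 - snn)  # -> 5/6
```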
# +
#export
from brown_phototour_revisited.extraction import *
from collections import defaultdict
def load_cached_results(desc_name: str,
learned_on: list = ['3rdparty'],
path_to_save_dataset:str = './dataset/',
path_to_save_descriptors: str = './descriptors/',
path_to_save_mAP: str = './mAP/',
patch_size: int = 32):
    '''Check whether the descriptor was already evaluated and, if so, load the cached results.'''
subsets = ['liberty', 'notredame', 'yosemite']
results = defaultdict(dict)
for train_ds in learned_on:
for subset in subsets:
if train_ds == '3rdparty':
load_path = f'{path_to_save_mAP}/{desc_name}_PS{patch_size}_3rdparty_{subset}.npy'
else:
load_path = f'{path_to_save_mAP}/{desc_name}_PS{patch_size}_learned{train_ds}_{subset}.npy'
if os.path.isfile(load_path):
print (f"Found saved results {load_path}, loading")
mAP = np.load(load_path)
results[train_ds][subset] = mAP
print (f'{desc_name} trained on {learned_on} PS = {patch_size} mAP on {subset} = {mAP:.5f}')
return results
# +
#export
from brown_phototour_revisited.extraction import *
from collections import defaultdict
def full_evaluation(models,
desc_name: str,
path_to_save_dataset:str = './dataset/',
path_to_save_descriptors: str = './descriptors/',
path_to_save_mAP: str = './mAP/',
patch_size: int = 32,
device: str = 'cpu',
backend='numpy',
distance='euclidean'):
    '''Perform descriptor extraction and evaluation on all datasets.
    models can be either a torch.nn.Module or a dict with keys from ['liberty', 'notredame', 'yosemite'],
    denoting the dataset each model was trained on.'''
subsets = ['liberty', 'notredame', 'yosemite']
if type(models) is dict:
results = load_cached_results(desc_name,
[x for x in models.keys()],
path_to_save_dataset,
path_to_save_descriptors,
path_to_save_mAP,
patch_size)
for learned_on, model in models.items():
for subset in subsets:
if subset == learned_on:
continue
                if learned_on in results:
                    if subset in results[learned_on]:
                        continue
try:
desc_dict = extract_pytorchinput_descriptors(model,
desc_name + '_' + learned_on,
subset = subset,
path_to_save_dataset = path_to_save_dataset,
path_to_save_descriptors = path_to_save_descriptors,
patch_size = patch_size,
device = device)
except:
desc_dict = extract_numpyinput_descriptors(model,
desc_name + '_' + learned_on,
subset= subset,
path_to_save_dataset = path_to_save_dataset,
path_to_save_descriptors = path_to_save_descriptors,
patch_size = patch_size)
mAP = evaluate_mAP_snn_based(desc_dict['descriptors'],
desc_dict['labels'],
desc_dict['img_idxs'],
path_to_save_mAP=f'{path_to_save_mAP}/{desc_name}_PS{patch_size}_learned{learned_on}_{subset}.npy',
backend=backend,
distance=distance)
results[learned_on][subset] = mAP
print (f'{desc_name} trained on {learned_on} PS = {patch_size} mAP on {subset} = {mAP:.5f}')
else:
model = models
results = load_cached_results(desc_name,
['3rdparty'],
path_to_save_dataset,
path_to_save_descriptors,
path_to_save_mAP,
patch_size)
for subset in subsets:
if '3rdparty' in results:
if subset in results['3rdparty']:
continue
try:
desc_dict = extract_pytorchinput_descriptors(model,
desc_name + '_3rdparty' ,
subset= subset,
path_to_save_dataset = path_to_save_dataset,
path_to_save_descriptors = path_to_save_descriptors,
patch_size = patch_size,
device = device)
except:
desc_dict = extract_numpyinput_descriptors(model,
desc_name + '_3rdparty' ,
subset= subset,
path_to_save_dataset = path_to_save_dataset,
path_to_save_descriptors = path_to_save_descriptors,
patch_size = patch_size)
mAP = evaluate_mAP_snn_based(desc_dict['descriptors'],
desc_dict['labels'],
desc_dict['img_idxs'],
path_to_save_mAP=f'{path_to_save_mAP}/{desc_name}_PS{patch_size}_3rdparty_{subset}.npy',
backend=backend,
distance=distance)
results['3rdparty'][subset] = mAP
print (f'{desc_name} trained on 3rdparty PS = {patch_size} mAP on {subset} = {mAP:.5f}')
return results
# -
# +
#export
from typing import Dict
def nice_results_3rdparty(desc_name:str, res_dict:Dict):
'''Returns formatted string with results'''
if 'liberty' in res_dict:
lib = f'{(100*res_dict["liberty"]):.2f}'
else:
lib = '-----'
if 'notredame' in res_dict:
notre = f'{(100*res_dict["notredame"]):.2f}'
else:
notre = '-----'
if 'yosemite' in res_dict:
yos = f'{(100*res_dict["yosemite"]):.2f}'
else:
yos = '-----'
res = f'{desc_name[:20].ljust(20)} {yos} {notre} {lib} '
return res
def nice_results_Brown(desc_name:str, res_dict:Dict) -> str:
'''Returns formatted string with results'''
NA = '-----'
lib_yos, lib_notre, yos_notre, yos_lib, notre_lib, notre_yos = NA,NA,NA,NA,NA,NA
if 'liberty' in res_dict:
cr = res_dict['liberty']
if 'notredame' in cr:
lib_notre = f'{(100*cr["notredame"]):.2f}'
else:
lib_notre = NA
if 'yosemite' in cr:
lib_yos = f'{(100*cr["yosemite"]):.2f}'
else:
lib_yos = NA
if 'notredame' in res_dict:
cr = res_dict['notredame']
if 'liberty' in cr:
notre_lib = f'{(100*cr["liberty"]):.2f}'
else:
notre_lib = NA
if 'yosemite' in cr:
notre_yos = f'{(100*cr["yosemite"]):.2f}'
else:
notre_yos = NA
if 'yosemite' in res_dict:
cr = res_dict['yosemite']
if 'liberty' in cr:
yos_lib = f'{(100*cr["liberty"]):.2f}'
else:
yos_lib = NA
if 'notredame' in cr:
yos_notre = f'{(100*cr["notredame"]):.2f}'
else:
yos_notre = NA
res = f'{desc_name[:20].ljust(18)} {lib_yos} {notre_yos} {lib_notre} {yos_notre} {notre_lib} {yos_lib}'
return res
def print_results_table(full_res_dict: Dict):
'''Function, which prints nicely formatted table with all results'''
TITLE00 = 'Mean Average Precision wrt Lowe SNN ratio criterion on UBC Phototour Revisited'
sep = '------------------------------------------------------------------------------'
TITLE1 = 'trained on liberty notredame liberty yosemite notredame yosemite'
TITLE2 = 'tested on yosemite notredame liberty'
print (sep)
print (TITLE00)
print (sep)
print (TITLE1)
print (TITLE2)
print (sep)
for desc_name, desc_results in full_res_dict.items():
if '3rdparty' in desc_results:
if len(desc_results['3rdparty']) == 3:
print (nice_results_3rdparty(desc_name, desc_results['3rdparty']))
else:
print (nice_results_Brown(desc_name, desc_results))
else:
print (nice_results_Brown(desc_name, desc_results))
print (sep)
return
# -
# Some visualization
res = {'Kornia RootSIFT 32px':
{'3rdparty': {'liberty': 0.49652328,
'notredame': 0.49066364,
'yosemite': 0.58237198}},
'OpenCV_LATCH 65px':
{'yosemite': {'liberty': 0.39075459,
'notredame': 0.37258606}}}
print_results_table(res)
| nbs/benchmarking.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from brainlit.utils import read_swc, df_to_graph, graph_to_paths
from brainlit.viz.visualize import napari_viewer
import numpy as np
from skimage import io
from scipy.ndimage.morphology import distance_transform_edt
from pathlib import Path
from brainlit.algorithms.image_processing import Bresenham3D
from brainlit.utils.benchmarking_params import brain_offsets, vol_offsets, scales, type_to_date
# +
# loading all the benchmarking images from local paths
# all the paths of gfp images are saved in variable gfp_files
# the folder of output masks is in the same folder where the folder of benchmarking data is
base_dir = Path("D:/Study/Nuero Data Design/brainlit")
data_dir = base_dir / "benchmarking_datasets"
im_dir = data_dir / "Images"
mask_dir = base_dir / "benchmarking_masks"
gfp_files = list(im_dir.glob("**/*.tif"))
swc_base_path = data_dir / "Manual-GT"
save = True
for im_num, im_path in enumerate(gfp_files):
# loading one gfp image
print(str(im_path))
im = io.imread(im_path, plugin="tifffile")
im = np.swapaxes(im, 0, 2)
file_name = im_path.parts[-1][:-8]
f = im_path.parts[-1][:-8].split("_")
image = f[0]
date = type_to_date[image]
num = int(f[1])
scale = scales[date]
brain_offset = brain_offsets[date]
vol_offset = vol_offsets[date][num]
im_offset = np.add(brain_offset, vol_offset)
# loading all the .swc files corresponding to the image
# all the paths of .swc files are saved in variable swc_files
lower = int(np.floor((num - 1) / 5) * 5 + 1)
upper = int(np.floor((num - 1) / 5) * 5 + 5)
dir1 = date + "_" + image + "_" + str(lower) + "-" + str(upper)
dir2 = date + "_" + image + "_" + str(num)
swc_path = swc_base_path / dir1 / dir2
swc_files = list(swc_path.glob("**/*.swc"))
paths_total = []
labels_total = np.zeros(im.shape)
# generate paths and save them into paths_total
for swc_num, swc in enumerate(swc_files):
if "cube" in swc.parts[-1]:
# skip the bounding box swc
continue
print(swc)
df, swc_offset, _, _, _ = read_swc(swc)
offset_diff = np.subtract(swc_offset, im_offset)
G = df_to_graph(df)
paths = graph_to_paths(G)
# for every path in that swc
for path_num, p in enumerate(paths):
pvox = (p + offset_diff) / (scale) * 1000
paths_total.append(pvox)
# generate labels by using paths
for path_voxel in paths_total:
for voxel_num, voxel in enumerate(path_voxel):
if voxel_num == 0:
continue
voxel_prev = path_voxel[voxel_num-1,:]
xs,ys,zs = Bresenham3D(int(voxel_prev[0]), int(voxel_prev[1]), int(voxel_prev[2]),int(voxel[0]), int(voxel[1]), int(voxel[2]))
for x,y,z in zip(xs,ys,zs):
vox = np.array((x,y,z))
if (vox >= 0).all() and (vox < im.shape).all():
labels_total[x,y,z] = 1
label_flipped = labels_total*0
label_flipped[labels_total==0] = 1
dists = distance_transform_edt(label_flipped, sampling = scale)
labels_total[dists <= 1000] = 1
if save:
im_file_name = file_name + "_mask.tif"
out_file = mask_dir / im_file_name
io.imsave(out_file, labels_total, plugin="tifffile")
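The dilation step above (Euclidean distance transform followed by a threshold) can be sketched on a toy volume. This is a numpy-only stand-in for `distance_transform_edt`, assuming an isotropic scale of 1 unit and a dilation radius of 1; the real code uses the per-dataset `scale` and a 1000 nm radius.

```python
import numpy as np

# Toy sketch of the mask-dilation step: every voxel within a physical
# radius of a labeled voxel becomes foreground.
labels = np.zeros((7, 7, 7))
labels[3, 3, 3] = 1  # a single labeled voxel at the center
zz, yy, xx = np.indices(labels.shape)
# brute-force Euclidean distance from every voxel to the labeled voxel
dists = np.sqrt((zz - 3) ** 2 + (yy - 3) ** 2 + (xx - 3) ** 2)
dilated = labels.copy()
dilated[dists <= 1] = 1  # threshold at the dilation radius
print(int(dilated.sum()))  # 7: the center voxel plus its six face neighbors
```

The diagonal neighbors sit at distance sqrt(2) > 1, so only the six face neighbors are added.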
# +
# checking whether masks can be loaded
show_napari = False
mask_files = list(mask_dir.glob("**/*.tif"))
for im_num, im_path in enumerate(gfp_files):
im = io.imread(im_path, plugin="tifffile")
im = np.swapaxes(im, 0, 2)
file_name = im_path.parts[-1][:-8]
mask_file = file_name + "_mask.tif"
mask_path = mask_dir / mask_file
mask = io.imread(mask_path, plugin="tifffile")
print("loading the mask of", file_name, "...")
if show_napari:
napari_viewer(im, labels=mask, label_name="mask")
# -
| experiments/ffn/generating_image_masks.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # seaborn package
#
# This notebook is based on https://qiita.com/hik0107/items/3dc541158fceb3156ee0
#
# **seaborn** is a Python package for data visualization. It is a wrapper around **matplotlib**: matplotlib is very flexible, but it requires many parameters to use. seaborn is included in Anaconda.
# +
# setup
import numpy as np
import pandas as pd
import seaborn as sns
x = np.random.normal(size=100) #create random data of numpy array
titanic = sns.load_dataset("titanic") ## Titanic survivor data from Kaggle
tips = sns.load_dataset("tips") ##
iris = sns.load_dataset("iris") ##
# +
# seaborn uses pandas DataFrames
print('data type is:', type(tips))
tips.head()
# +
# Histogram. Change the arguments and see what happens
# sns.distplot(x, kde=False, rug=False, bins=10)
sns.distplot(x, kde=True, rug=False, bins=20)
# +
# Scatter plot
sns.jointplot('sepal_width', 'petal_length', data=iris)
# +
# Scatter plot of variable combination
# sns.pairplot(iris)
sns.pairplot(iris, hue='species')
# -
features = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
sns.heatmap(iris[features].corr(), center=0, cmap='coolwarm', annot=True)
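What the heatmap above visualizes is the pairwise Pearson correlation matrix that `DataFrame.corr()` returns. A minimal sketch on a hypothetical three-column frame (not the iris data):

```python
import pandas as pd

# "b" is a perfect linear function of "a", "c" a perfect inverse one,
# so their correlations with "a" are +1 and -1 respectively.
df = pd.DataFrame({"a": [1.0, 2.0, 3.0],
                   "b": [2.0, 4.0, 6.0],
                   "c": [3.0, 2.0, 1.0]})
corr = df.corr()
print(corr.round(2))
```

`sns.heatmap(corr, ...)` then simply colors each cell of this matrix.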
# +
# Distribution (stripplot)
sns.stripplot(x="day", y="total_bill", data=tips)
# +
# separate lunch and dinner
sns.stripplot(x="day", y="total_bill", data=tips, hue='time')
# -
sns.boxplot(x="size", y="tip", data=tips.sort_values('size'))
# ### Color palette management
#
# Please refer to http://seaborn.pydata.org/tutorial/color_palettes.html
# +
flatui = ["#9b59b6", "#3498db", "#95a5a6", "#e74c3c", "#34495e", "#2ecc71"]
sns.palplot(sns.color_palette(flatui))
sns.set_palette(flatui)
# +
# Histogram (barplot)
# survivor ratio
sns.barplot(x='sex', y='survived', data=titanic, hue='class')
# +
# total survivors
titanic_grpby = titanic.groupby( ['sex', 'class'])
titanic_data_for_graph = titanic_grpby['survived'].aggregate(sum).reset_index()
sns.barplot(x='sex', y='survived', hue= 'class', data=titanic_data_for_graph)
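The groupby-and-aggregate step above can be sketched without the real titanic data. This uses a hypothetical mini-frame with the same three columns:

```python
import pandas as pd

# Mini stand-in for the titanic columns used above
df = pd.DataFrame({
    "sex": ["male", "male", "female", "female"],
    "class": ["First", "Second", "First", "First"],
    "survived": [0, 1, 1, 1],
})
# sum survivors per (sex, class) group, then flatten back to a DataFrame
totals = df.groupby(["sex", "class"])["survived"].sum().reset_index()
print(totals)
```

`reset_index()` is what turns the grouped result back into a flat DataFrame that seaborn's `barplot` can consume.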
# +
# count observations per category (countplot)
sns.countplot(x='sex', hue='embarked', data=titanic, palette='Greens_d')
# -
| seaborn.ipynb |
% -*- coding: utf-8 -*-
% ---
% jupyter:
% jupytext:
% text_representation:
% extension: .m
% format_name: light
% format_version: '1.5'
% jupytext_version: 1.14.4
% kernelspec:
% display_name: Octave
% language: octave
% name: octave
% ---
% + [markdown] slideshow={"slide_type": "slide"}
% ## Welcome to Interactive MATLAB
%
% Hit the space bar, swipe up, or click the down arrow in the bottom right to continue.
% <center>
% Go to: <a href="https://tinyurl.com/r9923cg"> tinyurl.com/r9923cg </a>
% </center>
% <p style="color: gray; padding-top: 0.25cm;text-align: center;">▶️Press the spacebar to continue</p>
% + [markdown] slideshow={"slide_type": "subslide"}
% ### Navigation
% * This is a 'subslide'
% * You can navigate the slide/subslide hierarchy using the arrow buttons in the bottom right corner
% + [markdown] slideshow={"slide_type": "slide"}
% ## Running Code
% * Run code by selecting the following cell and hitting <font color="blue">shift-enter</font> at the same time.
% * The number at the left tells you the execution order of code
% * Code cells can be run in any order
% + slideshow={"slide_type": "-"}
disp("Hello world!")
% + [markdown] slideshow={"slide_type": "subslide"}
% ### These slides are interactive!
% All code is live and editable; try changing the text below!
% -
disp("Hello world!")
% + [markdown] slideshow={"slide_type": "slide"}
% # Table of Contents
%
% * [Functions](Functions.ipynb)
% * [Control Flow](Control%20Flow.ipynb)
| Welcome.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 0.6.2
# language: julia
# name: julia-0.6
# ---
using Revise
#addprocs(4)
using Walk1DMDP, CMDPs
using POMDPs, POMDPToolbox, MCTS
using DataFrames
using Plots; gr()
mdp = Walk1D()
b = DPWBandit(mdp)
solver = ModularSolver(mdp, b; n_iterations=1200)
policy = solve(solver, mdp);
# + active=""
# #s0 = initial_state(mdp, Base.GLOBAL_RNG)
# #a, info = action_info(policy, s0)
# #best_path = info[:best_path]
# #policy = BPTrackerPolicy(best_path)
# -
s0 = initial_state(mdp, Base.GLOBAL_RNG)
hr = HistoryRecorder(; rng=Base.GLOBAL_RNG)
h = simulate(hr, mdp, policy, s0);
plot(mdp, h)
sum(h.reward_hist)
# ## Observer
mdp = Walk1D()
observer = AQObserver(1)
b = DPWBandit(mdp; exploration_constant=20.0, k_action=0.5, alpha_action=0.5, observer=observer)
solver = ModularSolver(mdp, b; n_iterations=1200)
policy = solve(solver, mdp);
s0 = initial_state(mdp, Base.GLOBAL_RNG)
a, info = action_info(policy, s0)
plot(observer, 1200)
animate(observer; fps=4, ylim=(-45.0,-12.0))
observer.Xs[10]
observer.ys[502]
observer.ys[501]
observer.ns[502]
| notebooks/DPW_Walk1D.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <center>
# <a href="http://www.insa-toulouse.fr/" ><img src="http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/logo-insa.jpg" style="float:left; max-width: 120px; display: inline" alt="INSA"/></a>
# <a href="http://wikistat.fr/" ><img src="http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/wikistat.jpg" style="max-width: 250px; display: inline" alt="Wikistat"/></a>
# <a href="http://www.math.univ-toulouse.fr/" ><img src="http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/logo_imt.jpg" style="float:right; max-width: 200px; display: inline" alt="IMT"/> </a>
# </center>
# # AI Framework.
# ## Lab 1 - Introduction to PySpark.
# #### Part 4: Introduction to the [SparkML](https://spark.apache.org/docs/latest/ml-guide.html) library (or *MLlib DataFrame-based API*) <a href="http://spark.apache.org/"><img src="http://spark.apache.org/images/spark-logo-trademark.png" style="max-width: 100px; display: inline" alt="Spark"/> </a>
# ## Introduction
#
# Since Spark 2.0, the MLlib library, which uses only RDDs, is no longer developed. It can still be used, but no new functionality will be added.
#
# The main *machine learning* library in Spark is now `SparkML`. `SparkML` uses only *DataFrames*.
#
# `SparkML` does not yet have as much functionality as `MLlib`, but it will reach feature parity in Spark 3.0.
# ## Context
from pyspark import SparkContext
from pyspark.sql import SparkSession
sc = SparkContext.getOrCreate()
spark = SparkSession.builder \
.master("local") \
.appName("cal4 pyspark") \
.getOrCreate()
# ## Elementary statistics
#
# Most of the functions used in the *MLlib* notebook do not exist in *SparkML*; only the correlation functions and hypothesis testing are available so far.
# ### Vectors object
#
# The *SparkML* library uses the `Vectors` object to manipulate arrays (similar to numpy).
from numpy import array
np_vectors=array([1.0,0.0,2.0,4.0,0.0])
np_vectors
from pyspark.ml.linalg import Vectors
denseVec2=Vectors.dense([1.0,0.0,2.0,4.0,0.0])
denseVec2
# The code above builds a *DenseVector*. A *SparseVector* can be used for sparse objects.
sparseVec1 = Vectors.sparse(10, {0: 1.0, 2: 2.0, 6: 4.0})
sparseVec1
# Another syntax
sparseVec2 = Vectors.sparse(10, [0, 2, 6], [1.0, 2.0, 4.0])
sparseVec2
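Both syntaxes describe the same thing: a sparse vector stores only a size, the indices of the non-zero entries, and their values. A plain-Python sketch of what `sparseVec1`/`sparseVec2` expand to:

```python
# Expand the sparse representation (size, indices, values) into a dense list
size = 10
indices, values = [0, 2, 6], [1.0, 2.0, 4.0]
dense = [0.0] * size
for i, v in zip(indices, values):
    dense[i] = v
print(dense)  # [1.0, 0.0, 2.0, 0.0, 0.0, 0.0, 4.0, 0.0, 0.0, 0.0]
```

For high-dimensional, mostly-zero data, storing only the non-zero pairs is far cheaper than the dense form.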
# ### Correlation
#
# The `pyspark.ml.stat.Correlation` function computes correlations (*Pearson* and *Spearman*) between columns of a *DataFrame*.
# +
from pyspark.ml.linalg import Vectors
from pyspark.ml.stat import Correlation
data = [(Vectors.sparse(4, [(0, 1.0), (3, -2.0)]),),
(Vectors.dense([4.0, 5.0, 0.0, 3.0]),),
(Vectors.dense([6.0, 7.0, 0.0, 8.0]),),
(Vectors.sparse(4, [(0, 9.0), (3, 1.0)]),)]
df = spark.createDataFrame(data, ["features"])
r1 = Correlation.corr(df, "features").head()
print("Pearson correlation matrix:\n" + str(r1[0]))
r2 = Correlation.corr(df, "features", "spearman").head()
print("Spearman correlation matrix:\n" + str(r2[0]))
# -
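Each entry of the matrix printed above is the ordinary Pearson coefficient between two feature columns. A numpy-only sketch on two toy columns (made-up values, not the Spark data above):

```python
import numpy as np

# Pearson correlation between two toy columns, computed with numpy
x = np.array([1.0, 4.0, 6.0, 9.0])
y = np.array([2.0, 5.0, 0.0, 1.0])
r = np.corrcoef(x, y)[0, 1]  # off-diagonal entry of the 2x2 matrix
print(round(r, 4))
```

`Correlation.corr` does the same pairwise computation across all columns of the `features` vector, distributed over the cluster.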
# ### Summary Statistics
r2
# ### Hypothesis testing
# +
from pyspark.ml.linalg import Vectors
from pyspark.ml.stat import ChiSquareTest
data = [(0.0, Vectors.dense(0.5, 10.0)),
(0.0, Vectors.dense(1.5, 20.0)),
(1.0, Vectors.dense(1.5, 30.0)),
(0.0, Vectors.dense(3.5, 30.0)),
(0.0, Vectors.dense(3.5, 40.0)),
(1.0, Vectors.dense(3.5, 40.0))]
df = spark.createDataFrame(data, ["label", "features"])
r = ChiSquareTest.test(df, "features", "label").head()
print("pValues: " + str(r.pValues))
print("degreesOfFreedom: " + str(r.degreesOfFreedom))
print("statistics: " + str(r.statistics))
# -
# ## ML Pipeline
# *SparkML* is based on the notion of an **ML Pipeline**.
#
# An **ML Pipeline** combines the different steps of the ML process, from data cleaning and processing to learning, through an object called a *pipeline* or *workflow*.
# ### Estimator, Transformer, and Param
#
# An **ML Pipeline** is built from three types of objects:
#
#
# * **Transformer**: a function that converts a *DataFrame* into another *DataFrame*. In most cases, the new *DataFrame* is the old one with extra columns. **Transformer** examples:
#     * The `predict` function of an ML model is a **Transformer**: it takes a DataFrame as input and returns a new DataFrame with a prediction column.
#     * A fitted *StringIndexer* takes a DataFrame with a string column and returns a new DataFrame with that column converted to numerical indices.
#
# * **Estimator**: an algorithm applied to a *DataFrame* in order to build a **Transformer**. **Estimator** example:
#     * A learning algorithm is an **Estimator**. Once it has been fitted on a *DataFrame*, it builds a **Transformer** able to perform predictions.
#
# * **Parameter**: an API shared by **Transformers** and **Estimators** to specify parameters.
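The Estimator/Transformer contract can be sketched without any framework. These are hypothetical classes, not SparkML's API; the point is only the shape of `fit` and `transform`:

```python
# Minimal sketch of the Estimator/Transformer pattern.

class MeanCenterer:
    """Estimator: fit() learns a statistic and returns a Transformer."""
    def fit(self, rows):
        mean = sum(rows) / len(rows)
        return MeanCenterModel(mean)

class MeanCenterModel:
    """Transformer: transform() maps data to data using the learned statistic."""
    def __init__(self, mean):
        self.mean = mean
    def transform(self, rows):
        return [r - self.mean for r in rows]

model = MeanCenterer().fit([1.0, 2.0, 3.0])  # learned mean is 2.0
print(model.transform([4.0]))  # [2.0]
```

In SparkML, `lr.fit(training)` plays the role of the Estimator call and the returned model's `transform` the role of the Transformer.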
# #### Example : Logistic Regression
# +
from pyspark.ml.linalg import Vectors
from pyspark.ml.classification import LogisticRegression
training = spark.createDataFrame([
(1.0, Vectors.dense([0.0, 1.1, 0.1])),
(0.0, Vectors.dense([2.0, 1.0, -1.0])),
(0.0, Vectors.dense([2.0, 1.3, 1.0])),
(1.0, Vectors.dense([0.0, 1.2, -0.5]))], ["label", "features"])
test = spark.createDataFrame([
(1.0, Vectors.dense([-1.0, 1.5, 1.3])),
(0.0, Vectors.dense([3.0, 2.0, -0.1])),
(1.0, Vectors.dense([0.0, 2.2, -1.5]))], ["label", "features"])
# -
# We create a `LogisticRegression` **Estimator**: `lr`.
#
lr = LogisticRegression(maxIter=10, regParam=0.01, featuresCol='features', labelCol='label', predictionCol='prediction', probabilityCol='probability')
lr.explainParams()
# We apply the estimator on the train *DataFrame*. It produces a **Transformer**.
model = lr.fit(training)
# We then apply the **Transformer** to the test `DataFrame`.
prediction = model.transform(test)
# The result is a new *DataFrame*, *prediction*, which is the test *DataFrame* with two new columns: prediction and probability.
prediction
# These names were specified in the definition of the `lr` **Estimator** used to build the `model` **Transformer**.
# results
result = prediction.select("features", "label", "probability", "prediction").collect()
for row in result:
print("features=%s, label=%s -> prob=%s, prediction=%s"
% (row.features, row.label, row.probability, row.prediction))
# ### Pipeline
# A **Pipeline** is an association of different **Transformers** and **Estimators** that specifies a complete ML workflow.
#
# To perform text classification we will apply these successive steps:
#
# * Convert the text into a list of words (Tokenizer)
# * Convert the list of words into numerical features (HashingTF)
# * Training
# * Prediction
#
# All these steps will be combined into a single **Pipeline** object.
#
# **NB** Tokenizer and HashingTF are used here without explanation. They will be studied in detail in the third lab of the AIF module.
# #### Example : Tokenize, Hash and logistic regression.
# +
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import HashingTF, Tokenizer
# Training DataFrame
training = spark.createDataFrame([
(0, "a b c d e spark", 1.0),
(1, "b d", 0.0),
(2, "spark f g h", 1.0),
(3, "hadoop mapreduce", 0.0)
], ["id", "text", "label"])
# Test DataFrame
test = spark.createDataFrame([
(4, "spark i j k"),
(5, "l m n"),
(6, "spark hadoop spark"),
(7, "apache hadoop")
], ["id", "text"])
# -
# *Tokenizer* is a **Transformer**: it converts the text column into a list of words directly, with no `fit` step.
tokenizer = Tokenizer(inputCol="text", outputCol="words")
df_tokenized = tokenizer.transform(training)
df_tokenized.select("words").take(4)
# *HashingTF* is likewise a **Transformer**: it maps each word list to a numerical feature vector.
hashingTF = HashingTF(inputCol="words", outputCol="features")
df_hash= hashingTF.transform(df_tokenized)
df_hash.select("features").take(4)
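The idea behind HashingTF is the hashing trick: each token is hashed into one of a fixed number of buckets, and the bucket counts form the feature vector. A framework-free sketch (the bucket count of 16 and the character-sum hash are illustrative assumptions; Spark uses a different hash and 2^18 features by default):

```python
def hashing_tf(tokens, num_features=16):
    """Count tokens into a fixed-size vector via a (toy) stable hash."""
    vec = [0] * num_features
    for tok in tokens:
        # a deterministic stand-in hash; Python's built-in hash() for str
        # is randomized per process, so we avoid it here
        idx = sum(ord(c) for c in tok) % num_features
        vec[idx] += 1
    return vec

# "spark" appears twice, so its bucket holds a count of 2
print(hashing_tf(["spark", "f", "g", "spark"]))
```

Hashing avoids building a vocabulary, at the cost of possible collisions between different tokens.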
# The steps above can be combined in a Pipeline along with the training step.
tokenizer = Tokenizer(inputCol="text", outputCol="words")
hashingTF = HashingTF(inputCol=tokenizer.getOutputCol(), outputCol="features")
lr = LogisticRegression(maxIter=10, regParam=0.001)
pipeline = Pipeline(stages=[tokenizer, hashingTF, lr])
# Apply all steps on the training dataframe.
model = pipeline.fit(training)
# Prediction
prediction = model.transform(test)
selected = prediction.select("id", "text", "probability", "prediction")
for row in selected.collect():
rid, text, prob, prediction = row
print("(%d, %s) --> prob=%s, prediction=%f" % (rid, text, str(prob), prediction))
# **Exercise** As in the second notebook, use the SparkML library to build an ML model to predict attacks on the [KDD Cup 1999](http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html) dataset.
#
#
# Try to use various **Transformers** within the pipeline you build: https://spark.apache.org/docs/latest/ml-features.html (one-hot encoder for string variables, PCA to reduce the number of features, different methods to scale the data, etc.).
| PySpark/Cal4-PySpark-Statelem&Pipeline-SparkML.ipynb |