# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Practice Notebook: Reading and Writing Files
# In this exercise, we will test your knowledge of reading and writing files by playing around with some text files.
# <br><br>
# Let's say we have a text file containing current visitors at a hotel. We'll call it *guests.txt*. Run the following code to create the file. The file will automatically populate with each initial guest's first name on its own line.
# +
guests = open("guests.txt", "w")
initial_guests = ["Bob", "Andrea", "Manuel", "Polly", "Khalid"]
for i in initial_guests:
    guests.write(i + "\n")
guests.close()
# -
# No output is generated for the above code cell. To check the contents of the newly created *guests.txt* file, run the following code.
with open("guests.txt") as guests:
    for line in guests:
        print(line)
# The output shows that our *guests.txt* file is correctly populated with each initial guest's first name on its own line. Cool!
# <br><br>
# Now suppose we want to update our file as guests check in and out. Fill in the missing code in the following cell to add guests to the *guests.txt* file as they check in.
# +
new_guests = ["Sam", "Danielle", "Jacob"]
with open("guests.txt", 'a') as guests:
    for i in new_guests:
        guests.write(i + "\n")
# The with statement closes the file automatically; no explicit close() is needed.
# -
# To check whether your code correctly added the new guests to the *guests.txt* file, run the following cell.
with open("guests.txt") as guests:
    for line in guests:
        print(line)
# The current names in the *guests.txt* file should be: Bob, Andrea, Manuel, Polly, Khalid, Sam, Danielle and Jacob.
# <br><br>
# Was the *guests.txt* file correctly appended with the new guests? If not, go back and edit your code making sure to fill in the gaps appropriately so that the new guests are correctly added to the *guests.txt* file. Once the new guests are successfully added, you have filled in the missing code correctly. Great!
# <br><br>
# Now let's remove the guests that have checked out already. There are several ways to do this, however, the method we will choose for this exercise is outlined as follows:
# 1. Open the file in "read" mode.
# 2. Iterate over each line in the file and put each guest's name into a Python list.
# 3. Open the file once again in "write" mode.
# 4. Add each guest's name in the Python list to the file one by one.
#
# <br>
# Ready? Fill in the missing code in the following cell to remove the guests that have checked out already.
# +
checked_out = ["Andrea", "Manuel", "Khalid"]
temp_list = []
with open("guests.txt", 'r') as guests:
    for g in guests:
        temp_list.append(g.strip())
with open("guests.txt", 'w') as guests:
    for name in temp_list:
        if name not in checked_out:
            guests.write(name + "\n")
# -
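# For comparison, the same read–filter–rewrite pattern can be written more compactly with a list comprehension. This is just one possible approach, shown here as a self-contained sketch (it first creates a sample *guests.txt* so the cell runs on its own).

```python
# Create a sample guests file so this sketch runs standalone.
with open("guests.txt", "w") as f:
    f.write("Bob\nAndrea\nManuel\nPolly\nKhalid\nSam\nDanielle\nJacob\n")

checked_out = ["Andrea", "Manuel", "Khalid"]

# Read and filter in one pass, then rewrite the file.
with open("guests.txt") as guests:
    staying = [g.strip() for g in guests if g.strip() not in checked_out]
with open("guests.txt", "w") as guests:
    guests.writelines(name + "\n" for name in staying)

print(staying)  # ['Bob', 'Polly', 'Sam', 'Danielle', 'Jacob']
```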
# To check whether your code correctly removed the checked out guests from the *guests.txt* file, run the following cell.
with open("guests.txt") as guests:
    for line in guests:
        print(line)
# The current names in the *guests.txt* file should be: Bob, Polly, Sam, Danielle and Jacob.
# <br><br>
# Were the names of the checked out guests correctly removed from the *guests.txt* file? If not, go back and edit your code making sure to fill in the gaps appropriately so that the checked out guests are correctly removed from the *guests.txt* file. Once the checked out guests are successfully removed, you have filled in the missing code correctly. Awesome!
# <br><br>
# Now let's check whether Bob and Andrea are still checked in. How could we do this? We'll just read through each line in the file to see if their name is in there. Run the following code to check whether Bob and Andrea are still checked in.
# +
guests_to_check = ['Bob', 'Andrea']
checked_in = []
with open("guests.txt", "r") as guests:
    for g in guests:
        checked_in.append(g.strip())
for check in guests_to_check:
    if check in checked_in:
        print("{} is checked in".format(check))
    else:
        print("{} is not checked in".format(check))
# -
# We can see that Bob is checked in while Andrea is not. Nice work! You've learned the basics of reading and writing files in Python!
# File: Using Python to Interact with the Operating System/WEEK 2/Reading_And_Writing_Files.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from LDA_function import tf, my_lda_func
import nltk
import numpy as np
import pandas as pd
from nltk.corpus import reuters
import matplotlib.pyplot as plt
# %matplotlib inline
nltk.download('reuters')
plt.style.use('ggplot')
# -
# Sample data for analysis
d1 = "Java is a language for programming that develops a software for several platforms. A compiled code or bytecode on Java application can run on most of the operating systems including Linux, Mac operating system, and Windows. Most of the syntax of Java is derived from the C++ and C languages."
d2 = "Python supports multiple programming paradigms and comes up with a large standard library, paradigms included are object-oriented, imperative, functional and procedural."
d3 = "Go is typed statically compiled language. It was created by <NAME>, <NAME>, and <NAME> in 2009. This language offers garbage collection, concurrency of CSP-style, memory safety, and structural typing."
d4 = "A young girl when she first visited magical Underland, <NAME> (<NAME>) is now a teenager with no memory of the place -- except in her dreams."
d5 = "Her life takes a turn for the unexpected when, at a garden party for her fiance and herself, she spots a certain white rabbit and tumbles down a hole after him. Reunited with her friends the Mad Hatter (<NAME>), the Cheshire Cat and others, Alice learns it is her destiny to end the Red Queen's (Helena Bonham Carter) reign of terror."
# +
# Using slow version tf_df
tf_df, id2word = tf([d1, d2, d3, d4, d5])
lil = []
for row in tf_df.values:
    lil_sub = []
    for idx, item in enumerate(row):
        if item:
            lil_sub.append((idx, item))
    lil.append(lil_sub)
shown, gamma_by_chunks = my_lda_func(corpus=lil, num_topics=2, id2word=id2word, topics_only=False, num_words=10, verbose=False, passes=10)
# -
shown
gamma_by_chunks
# ## Simulated Data (Sleep & Vaccine Policy)
sleep = pd.read_csv('sleep_diet_exercise.csv', header=None)
docs = [i[0] for i in sleep.values]
len(docs)
# +
tf_df, id2word = tf(docs)
lil = []
for row in tf_df.values:
    lil_sub = []
    for idx, item in enumerate(row):
        if item:
            lil_sub.append((idx, item))
    lil.append(lil_sub)
simu_result = my_lda_func(corpus=lil, num_topics=2, id2word=id2word, num_words=10, chunksize=20, passes=10, verbose=False)
# -
simu_result
def parse_result(result):
    """
    Reorganize the result of my_lda_func into a dict mapping
    topic number -> [probabilities, words].
    """
    result_dic = {}
    for topic_num, dist in result:
        unpack = []
        for obj in dist.split('+'):
            prob, word = obj.split('*')
            unpack.append((float(prob), word.strip().strip('"')))
        prob, word = zip(*unpack)
        result_dic[topic_num] = [prob, word]
    return result_dic
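# A quick standalone check of `parse_result` on a gensim-style topic string of the form `0.030*"word" + 0.020*"other"` (that input format is an assumption inferred from how the function splits on `+` and `*`; the function is duplicated here only so this snippet runs on its own).

```python
def parse_result(result):
    # Duplicated from the cell above for a self-contained demo.
    result_dic = {}
    for topic_num, dist in result:
        unpack = []
        for obj in dist.split('+'):
            prob, word = obj.split('*')
            unpack.append((float(prob), word.strip().strip('"')))
        prob, word = zip(*unpack)
        result_dic[topic_num] = [prob, word]
    return result_dic

sample = [(0, '0.030*"oil" + 0.020*"price"'), (1, '0.050*"sleep" + 0.010*"diet"')]
parsed = parse_result(sample)
print(parsed[0])  # [(0.03, 0.02), ('oil', 'price')]
```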
# Parsed result for simulated data
simulated_data_result = parse_result(simu_result)
simulated_data_result
# +
# Bar plots for simulated data
fig, axs = plt.subplots(2, 1, figsize=(12, 12))
cmap = ['lightsteelblue', 'pink', 'darkgrey', 'khaki', 'lightsalmon', 'darkseagreen']
# make a little extra space between the subplots
fig.subplots_adjust(hspace=0.3)
simu_dic = parse_result(simu_result)
for idx, ax in enumerate(axs):
    probability, words = simu_dic[idx]
    ax.bar(words, probability, color=cmap[idx])
    ax.set_xlabel("Word")
    ax.set_ylabel("Probability")
    ax.set_title(f"Topic {idx+1}")
# plt.savefig('figures/simulated_data_result.jpg')
plt.show()
# -
# ## Real-world Data 1: Reuters
np.random.seed(1)
ntotal = 1000
documents = reuters.fileids()
documents = np.random.choice(documents, ntotal)
docs = [reuters.raw(d) for d in documents]
len(docs)
# +
tf_df, id2word = tf(docs)
lil = []
for row in tf_df.values:
    lil_sub = []
    for idx, item in enumerate(row):
        if item:
            lil_sub.append((idx, item))
    lil.append(lil_sub)
real_result_1 = my_lda_func(corpus=lil, num_topics=4, id2word=id2word, num_words=10, chunksize=20, passes=10)
# -
# Parsed result for real data 1 from Reuters
reuters_data_result = parse_result(real_result_1)
reuters_data_result
# +
# Bar plots for Reuters data
fig, axs = plt.subplots(2, 2, figsize=(17, 7))
cmap = ['lightsteelblue', 'pink', 'darkgrey', 'khaki', 'lightsalmon', 'darkseagreen']
# make a little extra space between the subplots
real_result_1_dic = parse_result(real_result_1)
for idx, ax in enumerate(axs.ravel()):
    probability, words = real_result_1_dic[idx]
    ax.bar(words, probability, color=cmap[idx])
    ax.set_xlabel("Word")
    ax.set_ylabel("Probability")
    ax.set_title(f"Topic {idx+1}")
plt.tight_layout()
# plt.savefig('figures/reuters_data_result.jpg')
plt.show()
# -
# ## Real-world Data 2: Tweet
# +
# Real world sample data
raw_tweets = pd.read_csv('clean_tweets.csv')
tweets_list = raw_tweets.Tweets.values.tolist()
len(tweets_list)
# +
tf_df, id2word = tf(tweets_list)
lil = []
for row in tf_df.values:
    lil_sub = []
    for idx, item in enumerate(row):
        if item:
            lil_sub.append((idx, item))
    lil.append(lil_sub)
# + tags=[]
real_result_2 = my_lda_func(corpus=lil, num_topics=6, id2word=id2word, num_words=10, chunksize=100, verbose=False, passes=10)
# -
# Parsed result for real data 2 from Tweet
tweet_data_result = parse_result(real_result_2)
tweet_data_result
# +
# Bar plots for tweet data
fig, axs = plt.subplots(3, 2, figsize=(17, 6))
cmap = ['lightsteelblue', 'pink', 'darkgrey', 'khaki', 'lightsalmon', 'darkseagreen']
# make a little extra space between the subplots
real_result_2_dic = parse_result(real_result_2)
for idx, ax in enumerate(axs.ravel()):
    probability, words = real_result_2_dic[idx]
    ax.bar(words, probability, color=cmap[idx])
    ax.set_xlabel("Word")
    ax.set_ylabel("Probability")
    ax.set_title(f"Topic {idx+1}")
plt.tight_layout()
# plt.savefig('figures/tweet_data_result.jpg')
plt.show()
# File: src/Notebook.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ##### CNN-RNN
# + id="inaDTTsfjf_B"
import tensorflow as tf
import pandas as pd
import gensim
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv1D, Flatten, MaxPooling1D, LSTM, Embedding, Input, Dropout
from sklearn.model_selection import train_test_split
from sklearn.metrics import multilabel_confusion_matrix
from numpy import unique
from tensorflow.keras import layers
from tensorflow.math import confusion_matrix
# + colab={"base_uri": "https://localhost:8080/", "height": 250} id="yWddMcVkjzp9" outputId="e0920ee9-ca0f-44fb-c35b-9448eee97846"
test = pd.read_csv("../Datasets/FA-KESDataset/FA-KES-Dataset.csv")
test.columns
test.head()
# + colab={"base_uri": "https://localhost:8080/"} id="P6xCM8fqkHAF" outputId="e6fef559-2123-46d0-e94a-e807b5e5d1fd"
test['labels'].value_counts()
# + colab={"base_uri": "https://localhost:8080/", "height": 406} id="UQFlwqRHkJgR" outputId="f552321c-fbe7-460f-9edd-76fea00cf63d"
plt.figure(figsize=(10,6))
sns.countplot(x='labels', data=test)
# + id="KXHfauGSkLjg"
test['article_content'] = test['article_content'].apply(lambda x: str(x).lower())
# + id="5nHf76MrkN_l"
test = test[['article_content', 'labels']]
# + colab={"base_uri": "https://localhost:8080/"} id="Z1Xm1g_4kQ4I" outputId="b91f9ce0-3597-4357-cd1b-92497a054526"
test['labels'] = test['labels'].astype(float)
# + colab={"base_uri": "https://localhost:8080/"} id="vOM2g_oBkSMV" outputId="c61410c1-d43b-49f9-af6e-6f8d36ba9c75"
# !pip install spacy==2.2.3
# !python -m spacy download en_core_web_sm
# !pip install beautifulsoup4==4.9.1
# !pip install textblob==0.15.3
# !pip install git+https://github.com/laxmimerit/preprocess_kgptalkie.git --upgrade --force-reinstall
# + id="ZZPCf4ShkUDi"
import preprocess_kgptalkie as ps
# + id="NltNzKwwkWUn"
test['article_content'] = test['article_content'].apply(lambda x: ps.remove_special_chars(x))
# + id="hxcw8yFskXuK"
x = [d.split() for d in test['article_content'].tolist()]
y = test['labels'].values
# + colab={"base_uri": "https://localhost:8080/"} id="tnlLkzJGkcq9" outputId="afd4ee33-5bb1-4af7-81cf-b15d14b3c699"
# !pip install imbalanced-learn
# + colab={"base_uri": "https://localhost:8080/"} id="zb-vX7rvkeKe" outputId="9dc3e79f-91c5-472e-ed7b-5a346ab57115"
print(len(x))
# + id="BdEhuTOskg51"
DIM = 400
w2v_model = gensim.models.Word2Vec(sentences=x, vector_size=DIM, window=10, min_count=1)
# + id="iFBb336ck2-S"
tokenizer = tf.keras.preprocessing.text.Tokenizer()
tokenizer.fit_on_texts(x)
# + id="jd604c77k4SX"
x = tokenizer.texts_to_sequences(x)
# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="r-V0IBZBk5fN" outputId="bafba1ad-d55a-467f-fe34-3a40533d69fa"
plt.hist([len(a) for a in x], bins = 700)
plt.show()
# + colab={"base_uri": "https://localhost:8080/"} id="uegb7kuRk6xQ" outputId="c3656d22-e41d-4db6-a341-0e1514d2a0eb"
nos = np.array([len(a) for a in x])
len(nos[nos>1000])
# + id="JfCZt1NXk88H"
maxlen = 100
x = tf.keras.preprocessing.sequence.pad_sequences(x, maxlen=maxlen)
# + id="6PxJgLW4k-P0"
vocab_size = len(tokenizer.word_index) + 1
vocab = tokenizer.word_index
# + id="ToVB6dVqlAI9"
def get_weight_matrix(model):
    # Map each tokenizer index to its word2vec embedding vector.
    weight_matrix = np.zeros((vocab_size, DIM))
    for word, i in vocab.items():
        try:
            weight_matrix[i] = model.wv[word]
        except KeyError:
            # Word not in the word2vec vocabulary; leave its row as zeros.
            pass
    return weight_matrix
# + id="_SnH_lYDlB7x"
embedding_vectors = get_weight_matrix(w2v_model)
# + colab={"base_uri": "https://localhost:8080/"} id="y5X_QRfTlDNE" outputId="348c26e8-054b-4397-a699-592bf4cd356d"
embedding_vectors.shape
# + colab={"base_uri": "https://localhost:8080/"} id="6VbqeGVclEdX" outputId="e9d5c5f7-5f34-4a7e-c9c2-5ce5565c2d63"
maxlen
# + colab={"base_uri": "https://localhost:8080/"} id="Yo3h6MSclGoJ" outputId="b2b44345-c547-4930-d017-7ed07a55065f"
model = Sequential()
model.add(Embedding(vocab_size, output_dim=DIM, weights=[embedding_vectors], input_length=maxlen, trainable = False))
model.add(Conv1D(128, 5, activation="relu"))
model.add(MaxPooling1D(pool_size=2, strides=2, padding="valid"))
model.add(LSTM(32))
model.add(Dense(1, activation = 'sigmoid'))
model.compile(loss = tf.keras.losses.BinaryCrossentropy(),
optimizer = "Adam",
metrics = ['accuracy', 'Recall', 'Precision', 'TrueNegatives', 'TruePositives', 'FalsePositives', 'FalseNegatives'])
model.summary()
# + colab={"base_uri": "https://localhost:8080/"} id="1HZYbnm0lIT9" outputId="eacbd1d3-aead-4b40-91b1-feeb430e0ca3"
len(x)
# + id="shfeTsK3lKGP"
CNNRNNX_train, CNNRNNX_test, CNNRNNy_train, CNNRNNy_test = train_test_split(x, y, test_size=0.2, random_state=42)
# + id="TXzCG8wplM7E"
CNNRNNy_train = np.array(CNNRNNy_train)
CNNRNNy_test = np.array(CNNRNNy_test)
# + colab={"base_uri": "https://localhost:8080/"} id="YCxAl3iXlOJw" outputId="83bb8e25-d229-4a3a-c93a-fb06ac243459"
model.fit(CNNRNNX_train, CNNRNNy_train, epochs=100, batch_size=64)
# + colab={"base_uri": "https://localhost:8080/"} id="uw2-PcsxlQSF" outputId="9913a73e-81ac-43f9-f319-e5f01f861681"
model.evaluate(CNNRNNX_test, CNNRNNy_test)
# -
# # URL Classifier
# + colab={"base_uri": "https://localhost:8080/"} id="BjG8HpYVH6jT" outputId="29db936c-45b7-427c-9671-abf05356ea02"
# !pip install whois
# !pip install pyquery
# !pip install tqdm
# !pip install interruptingcow
# !pip install requests
# + id="NpwL93D5JDWv"
import requests
from interruptingcow import timeout
import whois
from datetime import datetime, timezone
import math
import pandas as pd
import numpy as np
from pyquery import PyQuery
from requests import get
import tensorflow as tf
import pandas as pd
import gensim
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv1D, Flatten, MaxPooling1D, LSTM, Embedding, Input
from sklearn.model_selection import train_test_split
from sklearn.metrics import multilabel_confusion_matrix
from numpy import unique
from tensorflow.keras import layers
from tensorflow.math import confusion_matrix
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import MinMaxScaler
# + id="XLLT4caMmHOH"
class UrlFeaturizer(object):
    def __init__(self, url):
        self.url = url
        self.domain = url.split('//')[-1].split('/')[0]
        self.today = datetime.now().replace(tzinfo=None)
        try:
            self.whois = whois.query(self.domain).__dict__
        except Exception:
            self.whois = None
        try:
            self.response = get(self.url)
            self.pq = PyQuery(self.response.text)
        except Exception:
            self.response = None
            self.pq = None

    ## URL string features
    def entropy(self):
        # Note: this returns sum(p * log2(p)), i.e. the negative of the
        # usual Shannon entropy, as in the original implementation.
        string = self.url.strip()
        prob = [float(string.count(c)) / len(string) for c in dict.fromkeys(list(string))]
        entropy = sum([(p * math.log(p) / math.log(2.0)) for p in prob])
        return entropy

    def ip(self):
        string = self.url
        flag = False
        if "." in string:
            elements_array = string.strip().split(".")
            if len(elements_array) == 4:
                for i in elements_array:
                    if i.isnumeric() and 0 <= int(i) <= 255:
                        flag = True
                    else:
                        flag = False
                        break
        if flag:
            return 1
        else:
            return 0

    def numDigits(self):
        digits = [i for i in self.url if i.isdigit()]
        return len(digits)

    def urlLength(self):
        return len(self.url)

    def numParameters(self):
        params = self.url.split('&')
        return len(params) - 1

    def numFragments(self):
        fragments = self.url.split('#')
        return len(fragments) - 1

    def numSubDomains(self):
        subdomains = self.url.split('http')[-1].split('//')[-1].split('/')
        return len(subdomains) - 1

    def domainExtension(self):
        ext = self.url.split('.')[-1].split('/')[0]
        return ext

    ## URL domain features
    def hasHttp(self):
        return 'http:' in self.url

    def hasHttps(self):
        return 'https:' in self.url

    def daysSinceRegistration(self):
        if self.whois and self.whois['creation_date']:
            diff = self.today - self.whois['creation_date'].replace(tzinfo=None)
            diff = str(diff).split(' days')[0]
            return diff
        else:
            return 0

    def daysSinceExpiration(self):
        if self.whois and self.whois['expiration_date']:
            diff = self.whois['expiration_date'].replace(tzinfo=None) - self.today
            diff = str(diff).split(' days')[0]
            return diff
        else:
            return 0

    ## URL page features
    def bodyLength(self):
        if self.pq is not None:
            return len(self.pq('html').text()) if self.urlIsLive() else 0
        else:
            return 0

    def numTitles(self):
        if self.pq is not None:
            titles = ['h{}'.format(i) for i in range(7)]
            titles = [self.pq(i).items() for i in titles]
            return len([item for s in titles for item in s])
        else:
            return 0

    def numImages(self):
        if self.pq is not None:
            return len([i for i in self.pq('img').items()])
        else:
            return 0

    def numLinks(self):
        if self.pq is not None:
            return len([i for i in self.pq('a').items()])
        else:
            return 0

    def scriptLength(self):
        if self.pq is not None:
            return len(self.pq('script').text())
        else:
            return 0

    def specialCharacters(self):
        if self.pq is not None:
            bodyText = self.pq('html').text()
            schars = [i for i in bodyText if not i.isdigit() and not i.isalpha()]
            return len(schars)
        else:
            return 0

    def scriptToSpecialCharsRatio(self):
        v = self.specialCharacters()
        if self.pq is not None and v != 0:
            sscr = self.scriptLength() / v
        else:
            sscr = 0
        return sscr

    def scriptTobodyRatio(self):
        v = self.bodyLength()
        if self.pq is not None and v != 0:
            sbr = self.scriptLength() / v
        else:
            sbr = 0
        return sbr

    def bodyToSpecialCharRatio(self):
        v = self.bodyLength()
        if self.pq is not None and v != 0:
            bscr = self.specialCharacters() / v
        else:
            bscr = 0
        return bscr

    def urlIsLive(self):
        # Compare the HTTP status code, not the Response object itself.
        return self.response is not None and self.response.status_code == 200

    def run(self):
        data = {}
        data['entropy'] = self.entropy()
        data['numDigits'] = self.numDigits()
        data['urlLength'] = self.urlLength()
        data['numParams'] = self.numParameters()
        data['hasHttp'] = self.hasHttp()
        data['hasHttps'] = self.hasHttps()
        data['urlIsLive'] = self.urlIsLive()
        data['bodyLength'] = self.bodyLength()
        data['numTitles'] = self.numTitles()
        data['numImages'] = self.numImages()
        data['numLinks'] = self.numLinks()
        data['scriptLength'] = self.scriptLength()
        data['specialChars'] = self.specialCharacters()
        data['ext'] = self.domainExtension()
        data['dsr'] = self.daysSinceRegistration()
        data['dse'] = self.daysSinceExpiration()
        data['sscr'] = self.scriptToSpecialCharsRatio()
        data['sbr'] = self.scriptTobodyRatio()
        data['bscr'] = self.bodyToSpecialCharRatio()
        data['num_%20'] = self.url.count("%20")
        data['num_@'] = self.url.count("@")
        data['has_ip'] = self.ip()
        return data
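# The string-only features need no network access and can be sanity-checked in isolation. This standalone sketch mirrors the logic of `entropy()` and `ip()` above using only the standard library (the helper names `url_entropy` and `looks_like_ip` are introduced here for illustration).

```python
import math

def url_entropy(url):
    s = url.strip()
    probs = [s.count(c) / len(s) for c in dict.fromkeys(s)]
    # Like the class above, this returns sum(p * log2(p)),
    # i.e. the negative of the usual Shannon entropy.
    return sum(p * math.log(p, 2) for p in probs)

def looks_like_ip(s):
    # 1 if the string is four dot-separated integers in 0..255, else 0.
    parts = s.strip().split(".")
    return int(len(parts) == 4 and all(p.isnumeric() and 0 <= int(p) <= 255 for p in parts))

print(looks_like_ip("192.168.0.1"))  # 1
print(looks_like_ip("example.com"))  # 0
print(url_entropy("aaaa"))           # 0.0 (a single repeated symbol has zero entropy)
```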
# + colab={"base_uri": "https://localhost:8080/", "height": 250} id="yOHSFkYYJO2-" outputId="9b9f77f5-e467-4d5d-ff92-9efb5a6fa6c6"
data = pd.read_csv('../Datasets/FA-KESDataset/FA-KES-Dataset.csv', names = ['id', 'title', 'content', 'source', 'date', 'location', 'rating', 'url'])
data.head(5)
# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="Shfj3_8fJP6P" outputId="d0e00c04-f131-43b4-9fac-4b19bb85540c"
data = data.drop(columns = ["source", "date", "location", "title", "content", "id"])
data = data.drop(data.index[0])
data.head(5)
# + id="_n1u-9THJVUV"
data['rating'].value_counts()
data['rating'] = data['rating'].astype(float)
# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="LWpniW5hHEpg" outputId="ebbeea2c-5860-41de-c5a7-f7979e6ddd00"
data.head(5)
# + colab={"base_uri": "https://localhost:8080/"} id="loTVPgJGGaBf" outputId="5877c021-b06d-4175-d827-5e91c54cebc6" tags=[]
features_list = []
for idx, row in data.iterrows():
    print(idx)
    url_string = row['url']
    rating = row['rating']
    features = UrlFeaturizer(url_string).run()
    features['rating'] = rating
    features_list.append(features)
df = pd.DataFrame(features_list)
# + id="9kGFmmwrH-Uc"
df
# + id="KEySgjxs2Tii"
df.replace(True,1,inplace = True)
df.replace(False,0,inplace = True)
df.ext = pd.Categorical(df.ext).codes
print(df.ext.head(5))
y = df['rating']
encoder = LabelEncoder()
encoder.fit(y)
Y = encoder.transform(y)
scaler = MinMaxScaler(feature_range=(0, 1))
df = df.drop(columns = ['rating'])
X = pd.DataFrame(scaler.fit_transform(df))
# + id="QQcgEDOv7-U3"
X
# + id="fny7v8rv0jrr" tags=[]
len(Y)
# + id="wSoDDBx12UbG"
URLX_train, URLX_test, URLy_train, URLy_test = train_test_split(X, Y, test_size=0.2, random_state=42)
# + id="E_L9allpNbvZ"
from tensorflow.keras.layers import InputLayer
model2 = Sequential()
model2.add(InputLayer(22))
model2.add(Dense(256, input_dim = 22 , activation = 'relu'))
model2.add(Dense(128, activation = 'relu'))
model2.add(Dense(64, activation = 'relu'))
model2.add(Dense(32, activation = 'relu'))
model2.add(Dense(16, activation = 'relu'))
model2.add(Dense(1, activation = 'sigmoid'))
model2.compile(loss = 'binary_crossentropy' ,optimizer='adam' , metrics = ['accuracy', 'Recall', 'Precision', 'TrueNegatives', 'TruePositives', 'FalsePositives', 'FalseNegatives'] )
model2.summary()
# + id="61Q8DlEz58ib" tags=[]
model2.fit(URLX_train, URLy_train, epochs = 100)
# + id="lKFf5UfoJ50n"
model2.evaluate(URLX_test, URLy_test)
# + id="Qqn6qZBdGq2G"
df
# -
# # Combined Predictions
# + id="abjvowaaTx_l" tags=[]
CNNRNNaccuracy = {}
CNNRNNaccuracy['model'] = model.evaluate(
CNNRNNX_test, CNNRNNy_test, verbose=0)
CNNRNNaccuracy = CNNRNNaccuracy['model'][1]
# + id="IVcFngsRL7CS"
URLaccuracy = {}
URLaccuracy['model'] = model2.evaluate(
URLX_test, URLy_test, verbose=0)
URLaccuracy = URLaccuracy['model'][1]
# + id="JzCiYsO0u2_0"
# CNNRNNweight = CNNRNNaccuracy / (CNNRNNaccuracy + URLaccuracy)
# URLweight = URLaccuracy / (CNNRNNaccuracy + URLaccuracy)
CNNRNNweight = 0.6
URLweight = 0.4
print(CNNRNNweight, URLweight)
# + id="fa5X2V3mxK2C"
CNNRNNpredictions = model.predict(CNNRNNX_test)
# + id="HMihceiA4lA8"
URLpredictions = model2.predict(URLX_test)
# -
CNNRNNpredictions[0]*0.6
URLpredictions[0]*0.4
# + id="SUMsQxho4pHQ"
newCNNRNNpredictions = []
for x in range(len(CNNRNNpredictions)):
    newCNNRNNpredictions.append([CNNRNNpredictions[x]*CNNRNNweight])
# + id="hqvOhOkEClkJ"
newURLpredictions = []
for x in range(len(URLpredictions)):
    newURLpredictions.append([URLpredictions[x]*URLweight])
# + id="z0dPnyTODpYB"
combinedResultArray = []
for x in range(len(newCNNRNNpredictions)):
    combinedResultArray.append([newCNNRNNpredictions[x][0] + newURLpredictions[x][0]])
# + id="0UK7bVc9FAI_"
for x in range(len(combinedResultArray)):
    if combinedResultArray[x][0] >= 0.5:
        combinedResultArray[x] = [1]
    else:
        combinedResultArray[x] = [0]
# + tags=[]
len(URLy_test)
# +
# predict as true
combinedTP = 0
# predict as false
combinedTN = 0
# incorrectly predict as true
combinedFP = 0
# incorrectly predict as false
combinedFN = 0
for x in range(len(combinedResultArray)):
    # URLy_test holds the ground-truth labels for the test set used by the combined model
    if combinedResultArray[x] == [1] and URLy_test[x] == 1:
        combinedTP += 1
    elif combinedResultArray[x] == [0] and URLy_test[x] == 0:
        combinedTN += 1
    elif combinedResultArray[x] == [1] and URLy_test[x] == 0:
        combinedFP += 1
    elif combinedResultArray[x] == [0] and URLy_test[x] == 1:
        combinedFN += 1
combinedTP, combinedTN, combinedFP, combinedFN
# +
combinedAcc = (combinedTP + combinedTN) / (combinedTP + combinedTN + combinedFP + combinedFN)
combinedRecall = combinedTP / (combinedTP + combinedFN)
combinedPrecision = combinedTP / (combinedTP + combinedFP)
print("Accuracy: " + str(combinedAcc) + "\nRecall: " + str(combinedRecall) + "\nPrecision: " + str(combinedPrecision) + "\nTN: " + str(combinedTN) + "\nTP: " +
str(combinedTP) + "\nFP: " + str(combinedFP) + "\nFN: " + str(combinedFN))
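# The weighted-average late fusion performed step by step above can be condensed into a small helper. This is a sketch, not part of the original pipeline; the name `combine_predictions` is introduced here, and the 0.6/0.4 weights are the hand-picked values used above.

```python
import numpy as np

def combine_predictions(p1, p2, w1=0.6, w2=0.4, threshold=0.5):
    """Weighted-average fusion of two sigmoid outputs, thresholded to 0/1."""
    p1 = np.asarray(p1, dtype=float).ravel()
    p2 = np.asarray(p2, dtype=float).ravel()
    return (w1 * p1 + w2 * p2 >= threshold).astype(int)

print(combine_predictions([0.9, 0.2, 0.7], [0.8, 0.1, 0.3]))  # [1 0 1]
```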
# + [markdown] id="PX2AQa6ArPOU" tags=[]
# # Inputting Singular URLs
# + id="rNqphNAVFAsa"
# -
# File: testing_stuff/.ipynb_checkpoints/FA_KES_Combined_CNNRNN_Classifier_+_URL_Classifier-checkpoint.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#from unified_bib import DataSlicer,PCA_calc,SODA,GroupingAlgorithm,Classification,tsfresh_chucksize, dynamic_tsfresh, PCA_projection, Model_Predict, classifiers_train
import unified_bib as uf
from importlib import reload
import numpy as np
import os
from sklearn.model_selection import train_test_split
# -
from sklearn.preprocessing import StandardScaler, Normalizer, MinMaxScaler
from tsfresh.feature_extraction import extract_features, ComprehensiveFCParameters
from tsfresh.feature_extraction.settings import from_columns
from tsfresh.feature_extraction import feature_calculators
from tsfresh.feature_extraction import EfficientFCParameters
import pickle
import pandas as pd
import tsfresh
from psutil import cpu_percent
from tsfresh import extract_features
from tsfresh import select_features
from tsfresh.utilities.dataframe_functions import impute
import matplotlib.pyplot as plt
# # Filtering TSFRESH Features
# - Given an `output_id`, this function removes all NaN features from the TSFRESH extraction
uf.tsfresh_NaN_filter('100')
# + [markdown] colab_type="text" id="EZFxLQalFQ7y"
# # Using original data
# ## Training phase
# -
output_id = '100'
full_data = np.genfromtxt('Input/Output_' + output_id + '.csv',
delimiter=',')
# +
target = full_data[:,-1]
ID = full_data[:,0]
diam_target = np.ones(target.shape)
diam_target[target==0] = 0
diam_target[0:112*750] = -1
unique_id = np.unique(ID).tolist()
id_train, id_test = train_test_split(unique_id, test_size=0.3)
train_index = []
test_index = []
for i in range(len(ID)):
    if ID[i] in id_train:
        train_index.append(i)
    elif ID[i] in id_test:
        test_index.append(i)
train_set = full_data[train_index,:]
test_set = full_data[test_index,:]
_,train_index_first_occurance = np.unique(train_set[:,0],return_index=True)
_,test_index_first_occurance = np.unique(test_set[:,0],return_index=True)
train_set_target = train_set[train_index_first_occurance.tolist(),-1]
test_set_target = test_set[test_index_first_occurance.tolist(),-1]
train_set_diam = diam_target[train_index_first_occurance.tolist()]
test_set_diam = diam_target[test_index_first_occurance.tolist()]
# -
SelectedFeatures = uf.tsfresh_chucksize(train_set,output_id)
#SelectedFeatures = uf.tsfresh_extraction(9)
ReducedFeatures = uf.PCA_calc(SelectedFeatures,3,'Calc') # (selected features, number of PCs to keep, mode: 'Test', 'Calc', 'Specific', or 'Analytics')
# ## Testing phase
extracted_features, ets = uf.dynamic_tsfresh(test_set, 'test')
projected_features = uf.PCA_projection(extracted_features)
uf.Classification_it(ReducedFeatures['ReducedFeatures'],projected_features,train_set_target,test_set_target)
# +
training_data = ReducedFeatures['ReducedFeatures']
test_data = projected_features
fig = plt.figure(figsize=[14,10])
ax = fig.add_subplot(111, projection='3d')
# Training set
ax.scatter(training_data[train_set_target == 0,0],
           training_data[train_set_target == 0,1],
           training_data[train_set_target == 0,2],
           c='b', label='Train - good tool')
ax.scatter(training_data[train_set_target == 1,0],
           training_data[train_set_target == 1,1],
           training_data[train_set_target == 1,2],
           c='r', label='Train - worn tool')
# Test set
ax.scatter(test_data[test_set_target == 0,0],
           test_data[test_set_target == 0,1],
           test_data[test_set_target == 0,2],
           c='g', label='Test - good tool')
ax.scatter(test_data[test_set_target == 1,0],
           test_data[test_set_target == 1,1],
           test_data[test_set_target == 1,2],
           c='purple', label='Test - worn tool')
plt.ylabel('PC2',fontsize = 20,labelpad=18)
plt.xlabel('PC1',fontsize = 20, labelpad=18)
ax.set_zlabel('PC3', fontsize = 20, labelpad=12)
plt.tick_params(axis='x', labelsize=16)
plt.tick_params(axis='y', labelsize=16)
plt.tick_params(axis='z', labelsize=16)
ax.grid()
plt.legend()
plt.show()
# -
# # Misc
reload(uf)
projected_features[:,1] *= -1
SODA_parameters, processing_parameters = uf.SODA(ReducedFeatures,2.5,8,0.25) # (reduced features, minimum granularity, maximum granularity, step)
ClassificationPar = uf.GroupingAlgorithm(SODA_parameters,95, processing_parameters) # (SODA labels, definition percentage, processing parameters)
reload(uf)
# Alarm tag
import time
while True:
    os.system("printf '\a'")  # or '\7'
    time.sleep(0.001)
test_tsfresh(SelectedFeatures,extracted_features)
os.chdir(r'C:\Users\mathe\OneDrive\Documentos\GitHub\Lathes_Tool_Project')
# +
# .Recovery
if True:
    print('Recovery Control Output')
    print('----------------------------------')
    D_S_parameters = Recovery('D_S_parameters')
    ExtractedNames = Recovery('ExtractedNames')
    SelectedFeatures = Recovery('SelectedFeatures')
    ReducedFeatures = Recovery('ReducedFeatures')
    Output_ID = int(D_S_parameters['ID'])
    print('The current Data ID is ', Output_ID)
    print('__________________________________________')
# File: Python/JCAE_Major/.ipynb_checkpoints/model_notebook-checkpoint.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import warnings
warnings.filterwarnings("ignore")
from sklearn.datasets import load_iris
import pandas as pd
import numpy as np
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import (confusion_matrix, classification_report, precision_recall_fscore_support,
roc_auc_score, roc_curve, log_loss, auc)
from sklearn.multiclass import OneVsRestClassifier
##plotting
from plotly.offline import init_notebook_mode, iplot, download_plotlyjs
import plotly.offline as pyo
import cufflinks as cf
import matplotlib.pyplot as plt
init_notebook_mode(connected=True)
cf.go_offline()
# -
iris = load_iris()
df = pd.DataFrame(np.concatenate((iris.data.reshape(-1,4),iris.target.reshape(-1,1)),axis=1))
df.columns = [feature[:-5] for feature in iris.feature_names] + ['target']  # strip the trailing " (cm)" from each feature name
df['target'] = df.target.apply(lambda x: iris['target_names'][int(x)])
# df.drop(target,axis=1,inplace=True)
pyo.iplot(
{
'data': [
{
'x': df[df['target']==label]['petal width'],
'y': df[df['target']==label]['petal length'],
'name': label, 'mode': 'markers',
} for label in iris.target_names
],
'layout': {
'xaxis': {'title': 'petal width'},
'yaxis': {'title': "petal length"}
}
})
df = pd.get_dummies(df)
target = [col for col in df.columns if col.startswith('target')]
variables = list(set(df.columns) - set([col for col in df.columns if col.startswith('target')]))
X_train, X_test, y_train, y_test = train_test_split(df[variables],df[target],test_size=0.2,random_state=42)
rf = RandomForestClassifier(random_state=42, max_depth=4, n_estimators=10, min_samples_leaf=3)
rf.fit(X_train,y_train)
nb = OneVsRestClassifier(GaussianNB())
nb.fit(X_train,y_train)
svm = OneVsRestClassifier(SVC(random_state=42,probability=True,kernel='linear',gamma='auto'))
svm.fit(X_train,y_train)
def multi_label_confusion_matrix(X, y, clf):
zipped = np.dstack((y.values, clf.predict(X)))
# else:
# lr_pred = np.zeros_like(y)
# for ind,col_ind in enumerate(np.argmax(clf.predict_proba(X),axis=1)):
# lr_pred[ind, col_ind] = 1
# zipped = np.dstack((y.values, lr_pred))
conf_matrix = np.zeros([y.shape[1],y.shape[1]])
for rows in zipped:
if len(np.where(rows[:,1]==1)[0])>0:
conf_matrix[np.where(rows[:,0]==1)[0][0],np.where(rows[:,1]==1)[0][0]] += 1
return conf_matrix
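# As a sanity check on `multi_label_confusion_matrix`: when every row of the one-hot
# indicator matrix contains exactly one 1, its result coincides with sklearn's
# `confusion_matrix` applied to the argmax-decoded labels. A minimal sketch on toy
# arrays (not the iris data above):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Toy one-hot labels for 3 classes and 6 samples.
y_true = np.array([[1,0,0],[1,0,0],[0,1,0],[0,1,0],[0,0,1],[0,0,1]])
y_pred = np.array([[1,0,0],[0,1,0],[0,1,0],[0,1,0],[0,0,1],[1,0,0]])

# Decoding the one-hot rows back to class indices reduces the problem
# to the ordinary single-label confusion matrix.
cm = confusion_matrix(y_true.argmax(axis=1), y_pred.argmax(axis=1))
print(cm)
```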
# Metrics for classification:
# <br>
# <br>
# <font size='4'>
# <center>
# $Accuracy = \frac{TP+TN}{TP+TN+FP+FN}$ | $Precision = \frac{TP}{TP+FP}$ | $Recall = \frac{TP}{TP+FN}$ | $F1 = \frac{2}{\frac{1}{Precision}+\frac{1}{Recall}}$
# </center>
# </font>
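# Each of these metrics can be read straight off a confusion matrix whose rows are
# true classes and columns are predicted classes. A small sketch on a made-up
# 3-class matrix (the numbers are invented, not the iris results):

```python
import numpy as np

# Toy 3-class confusion matrix: rows = true class, columns = predicted class.
cm = np.array([[40,  2,  0],
               [ 3, 35,  4],
               [ 1,  2, 33]], dtype=float)

tp = np.diag(cm)
fp = cm.sum(axis=0) - tp  # predicted as the class, but actually something else
fn = cm.sum(axis=1) - tp  # actually the class, but predicted as something else
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 / (1 / precision + 1 / recall)  # harmonic mean, as in the formula above
accuracy = tp.sum() / cm.sum()
print(accuracy, precision, recall, f1)
```

`precision_recall_fscore_support` returns the same per-class vectors directly.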
y_train.columns
print(multi_label_confusion_matrix(X_train, y_train, rf))
print(classification_report(y_train,rf.predict(X_train)))
# ### Cohen's Kappa statistic
# <br>
# <br>
# Formula for the kappa statistic:
# <br>
# <center>
# <font size="4">
# $\kappa = \frac{(observed\space accuracy - expected\space accuracy)}{(1 - expected\space accuracy)}$
# </font>
# </center>
rf_conf_matrix = multi_label_confusion_matrix(X_train, y_train, rf)
rf_conf_matrix
with_marginal = np.zeros((4,4))
with_marginal[:3,:3] = rf_conf_matrix
for i in range(3):
with_marginal[3,i] = with_marginal[:3,i].sum()
with_marginal[i,3] = with_marginal[i,:3].sum()
with_marginal[3,3] = with_marginal[:3,:3].sum()
with_marginal_prob = with_marginal/with_marginal[3,3]  # normalise by the grand total (here the 120 training samples)
print(with_marginal,'\n\n',with_marginal_prob)
observed_accuracy = rf_conf_matrix.diagonal().sum()/rf_conf_matrix.sum()
observed_accuracy
expected_accuracy = 0
for i in range(3):
expected_accuracy += with_marginal_prob[i,3]*with_marginal_prob[3,i]
expected_accuracy
kappa = (observed_accuracy - expected_accuracy)/(1 - expected_accuracy)
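# The hand computation above can be cross-checked with `sklearn.metrics.cohen_kappa_score`,
# which implements the same formula. A self-contained sketch on toy label vectors
# (not the iris predictions):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Toy label vectors for 3 classes.
y_true = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 2])
y_pred = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2, 0])

observed = (y_true == y_pred).mean()  # observed accuracy
# Expected accuracy: product of the row and column marginals, summed over classes.
expected = sum((y_true == c).mean() * (y_pred == c).mean() for c in np.unique(y_true))
kappa_manual = (observed - expected) / (1 - expected)

assert np.isclose(kappa_manual, cohen_kappa_score(y_true, y_pred))
print(round(kappa_manual, 4))
```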
print(multi_label_confusion_matrix(X_train, y_train, nb))
print(classification_report(y_train,nb.predict(X_train)))
print(multi_label_confusion_matrix(X_train, y_train, svm))
print(classification_report(y_train,svm.predict(X_train)))
# # Log loss
# Formula for binary classification, where N is the number of observations:
# <br>
# <br>
# <font size="4.5">
# <center>
# $logloss = -\frac{1}{N} \sum_{i=1}^{N} (y_{i} \log p_{i} + (1 - y_{i}) \log (1 - p_{i}))$
# </center>
# </font>
# <br>
# <br>
# Formula for Multiclass classification, where N is the number of observations and M the number of classes:
# <br>
# <br>
# <font size="4.5">
# <center>
# $logloss = -\frac{1}{N} \sum_{i=1}^{N}\sum_{c=1}^{M} y_{i,c} \log p_{i,c}$
# </center>
# </font>
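# Both formulas reduce to "mean negative log-probability assigned to the true class".
# A quick check of the multiclass formula against `sklearn.metrics.log_loss` on
# made-up probabilities:

```python
import numpy as np
from sklearn.metrics import log_loss

# Toy 3-class targets (one-hot) and predicted probabilities
# (binary fractions so each row sums to exactly 1).
y_true = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 1, 0]])
proba = np.array([[0.5,   0.25,  0.25],
                  [0.25,  0.5,   0.25],
                  [0.125, 0.25,  0.625],
                  [0.25,  0.375, 0.375]])

# Multiclass formula above: mean over samples of -sum_c y_{i,c} log p_{i,c}.
manual = -(y_true * np.log(proba)).sum() / y_true.shape[0]
assert np.isclose(manual, log_loss(y_true, proba))
print(round(manual, 4))
```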
def pred_proba_rf(rf, X, y):
    # sklearn returns one (n_samples, 2) array per label; keep the P(class=1) column of each
    return np.dstack(np.array([pred[:,1] for pred in rf.predict_proba(X)])).reshape(y.shape)
train_log_loss = -np.nan_to_num(np.log(pred_proba_rf(rf, X_train, y_train))*y_train.values).sum()/y_train.shape[0]
print("Random forest log loss: %0.4f" %train_log_loss)
train_log_loss = -np.nan_to_num(np.log(nb.predict_proba(X_train))*y_train.values).sum()/y_train.shape[0]
print("Naive Bayes log loss: %0.4f" %train_log_loss)
train_log_loss = -np.nan_to_num(np.log(svm.predict_proba(X_train))*y_train.values).sum()/y_train.shape[0]
print("SVM log loss: %0.4f" %train_log_loss)
fpr_0, tpr_0, thr_0 = roc_curve(1*(y_train.idxmax(axis=1)=='target_setosa'),pred_proba_rf(rf, X_train, y_train)[:,0])
fpr_1, tpr_1, thr_1 = roc_curve(1*(y_train.idxmax(axis=1)=='target_versicolor'),pred_proba_rf(rf, X_train, y_train)[:,1])
fpr_2, tpr_2, thr_2 = roc_curve(1*(y_train.idxmax(axis=1)=='target_virginica'),pred_proba_rf(rf, X_train, y_train)[:,2])
fpr_micro, tpr_micro, thr_micro = roc_curve(y_train.values.ravel(), pred_proba_rf(rf, X_train, y_train).ravel())
ind_series = pd.Series(np.sort(np.unique(np.concatenate((fpr_0,fpr_1,fpr_2)))), name='False Positive Rate')
roc_c = pd.merge(ind_series.to_frame(), pd.DataFrame(np.array([fpr_0,tpr_0]).T).rename(columns={0:'False Positive Rate',1:'Recall_setosa'}), how='left', left_on='False Positive Rate', right_on='False Positive Rate')
roc_c = pd.merge(roc_c, pd.DataFrame(np.array([fpr_1,tpr_1]).T).rename(columns={0:'False Positive Rate',1:'Recall_versicolor'}), how='left', left_on='False Positive Rate', right_on='False Positive Rate')
roc_c = pd.merge(roc_c, pd.DataFrame(np.array([fpr_2,tpr_2]).T).rename(columns={0:'False Positive Rate',1:'Recall_virginica'}), how='left', left_on='False Positive Rate', right_on='False Positive Rate')
roc_c = pd.merge(roc_c, pd.DataFrame(np.array([fpr_micro,tpr_micro]).T).rename(columns={0:'False Positive Rate',1:'Recall_micro'}), how='left', left_on='False Positive Rate', right_on='False Positive Rate')
roc_c.set_index('False Positive Rate',inplace=True)
roc_c['Recall_random'] = roc_c.index  # diagonal: the ROC of a random classifier
for col in roc_c.columns:
roc_c[col] = roc_c[col].interpolate()
roc_c['Recall_macro'] = (roc_c['Recall_setosa']+roc_c['Recall_virginica']+roc_c['Recall_versicolor'])/3.0
roc_c.iplot(yTitle='Recall',xTitle='False Positive Rate',title='ROC curve for Random Forest classifier')
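# The micro/macro distinction drawn in the curves above also applies to the AUC
# itself: `roc_auc_score` computes either average directly from the one-hot targets
# and the score matrix. A toy sketch (values invented; the single 0.4/0.4 tie is
# what pulls the micro average below 1):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Toy one-hot targets and class scores for 3 classes.
y = np.array([[1,0,0],[1,0,0],[0,1,0],[0,1,0],[0,0,1],[0,0,1]])
scores = np.array([[0.9, 0.05, 0.05],
                   [0.6, 0.3,  0.1 ],
                   [0.2, 0.7,  0.1 ],
                   [0.3, 0.4,  0.3 ],
                   [0.1, 0.2,  0.7 ],
                   [0.4, 0.1,  0.5 ]])

macro = roc_auc_score(y, scores, average='macro')  # mean of the per-class AUCs
micro = roc_auc_score(y, scores, average='micro')  # AUC over all pooled (sample, class) pairs
print(macro, micro)
```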
fpr_0, tpr_0, thr_0 = roc_curve(1*(y_train.idxmax(axis=1)=='target_setosa'),nb.predict_proba(X_train)[:,0])
fpr_1, tpr_1, thr_1 = roc_curve(1*(y_train.idxmax(axis=1)=='target_versicolor'),nb.predict_proba(X_train)[:,1])
fpr_2, tpr_2, thr_2 = roc_curve(1*(y_train.idxmax(axis=1)=='target_virginica'),nb.predict_proba(X_train)[:,2])
fpr_micro, tpr_micro, thr_micro = roc_curve(y_train.values.ravel(), nb.predict_proba(X_train).ravel())
ind_series = pd.Series(np.sort(np.unique(np.concatenate((fpr_0,fpr_1,fpr_2)))), name='False Positive Rate')
roc_c = pd.merge(ind_series.to_frame(), pd.DataFrame(np.array([fpr_0,tpr_0]).T).rename(columns={0:'False Positive Rate',1:'Recall_setosa'}), how='left')
roc_c = pd.merge(roc_c, pd.DataFrame(np.array([fpr_1,tpr_1]).T).rename(columns={0:'False Positive Rate',1:'Recall_versicolor'}), how='left')
roc_c = pd.merge(roc_c, pd.DataFrame(np.array([fpr_2,tpr_2]).T).rename(columns={0:'False Positive Rate',1:'Recall_virginica'}), how='left')
roc_c = pd.merge(roc_c, pd.DataFrame(np.array([fpr_micro,tpr_micro]).T).rename(columns={0:'False Positive Rate',1:'Recall_micro'}), how='left', left_on='False Positive Rate', right_on='False Positive Rate')
roc_c.set_index('False Positive Rate',inplace=True)
roc_c['Recall_random'] = roc_c.index  # diagonal: the ROC of a random classifier
for col in roc_c.columns:
roc_c[col] = roc_c[col].interpolate()
roc_c['Recall_macro'] = (roc_c['Recall_setosa']+roc_c['Recall_virginica']+roc_c['Recall_versicolor'])/3.0
roc_c.iplot(yTitle='Recall',xTitle='False Positive Rate',title='ROC curve for OvR Naive Bayes classifier')
fpr_0, tpr_0, thr_0 = roc_curve(1*(y_train.idxmax(axis=1)=='target_setosa'),svm.predict_proba(X_train)[:,0])
fpr_1, tpr_1, thr_1 = roc_curve(1*(y_train.idxmax(axis=1)=='target_versicolor'),svm.predict_proba(X_train)[:,1])
fpr_2, tpr_2, thr_2 = roc_curve(1*(y_train.idxmax(axis=1)=='target_virginica'),svm.predict_proba(X_train)[:,2])
fpr_micro, tpr_micro, thr_micro = roc_curve(y_train.values.ravel(), svm.predict_proba(X_train).ravel())
ind_series = pd.Series(np.sort(np.unique(np.concatenate((fpr_0,fpr_1,fpr_2,fpr_micro)))), name='False Positive Rate')
roc_c = pd.merge(ind_series.to_frame(), pd.DataFrame(np.array([fpr_0,tpr_0]).T).rename(columns={0:'False Positive Rate',1:'Recall_setosa'}), how='left', left_on='False Positive Rate', right_on='False Positive Rate')
roc_c = pd.merge(roc_c, pd.DataFrame(np.array([fpr_1,tpr_1]).T).rename(columns={0:'False Positive Rate',1:'Recall_versicolor'}), how='left', left_on='False Positive Rate', right_on='False Positive Rate')
roc_c = pd.merge(roc_c, pd.DataFrame(np.array([fpr_2,tpr_2]).T).rename(columns={0:'False Positive Rate',1:'Recall_virginica'}), how='left', left_on='False Positive Rate', right_on='False Positive Rate')
roc_c = pd.merge(roc_c, pd.DataFrame(np.array([fpr_micro,tpr_micro]).T).rename(columns={0:'False Positive Rate',1:'Recall_micro'}), how='left', left_on='False Positive Rate', right_on='False Positive Rate')
roc_c.set_index('False Positive Rate',inplace=True)
roc_c['Recall_random'] = roc_c.index  # diagonal: the ROC of a random classifier
for col in roc_c.columns:
roc_c[col] = roc_c[col].interpolate()
roc_c['Recall_macro'] = (roc_c['Recall_setosa']+roc_c['Recall_virginica']+roc_c['Recall_versicolor'])/3.0
roc_c.iplot(yTitle='Recall',xTitle='False Positive Rate',title='ROC curve for OvR SVM classifier')
# ## Other datasets
from sklearn.datasets import make_classification
dataset, target = make_classification(n_samples=10000,
n_features=12,
n_informative=7,
n_redundant=5,
n_classes=4,
random_state=42)
df2 = pd.DataFrame(dataset)
df2['target'] = target
df2.corr()
| MultiClass_Classification_metrics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/KevinTheRainmaker/Recommendation_Algorithms/blob/main/colab/fastcampus/Matrix_Factorization_Trial.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="3xgMjHOXeC4a"
# # Matrix Factorization
#
# - Uses the MovieLens dataset
# - Implements SVD by hand and searches for a suitable value of k
# - Introduces a Python library that makes matrix factorization simple
# + colab={"base_uri": "https://localhost:8080/"} id="j-BKixIEd_S9" outputId="2b755501-0732-42c0-e4c3-ac8d5b3419d6"
from google.colab import drive
drive.mount('/content/drive')
# + [markdown] id="pcYMBuuPepPU"
# ## Import packages
# + id="mCX8tdnWeZ1f"
import os
import pandas as pd
import numpy as np
from math import sqrt
from tqdm.notebook import tqdm
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
import matplotlib.pyplot as plt
# + [markdown] id="8RS_EZw-fLx1"
# ## Load Dataset
# + id="23Rk-IpSfAsN"
path = '/content/drive/MyDrive/data/movielens'
ratings_df = pd.read_csv(os.path.join(path, 'ratings.csv'), encoding='utf-8')
# + colab={"base_uri": "https://localhost:8080/"} id="qR6djrLZfj2j" outputId="88f506eb-348b-4029-ab49-a506e45737af"
ratings_df.shape
# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="YepCsegOfl50" outputId="d4fc910c-275d-4f4b-d342-88e976bd6c13"
ratings_df.head()
# + colab={"base_uri": "https://localhost:8080/"} id="x92GXVFbT60e" outputId="dca1f111-cdb2-406b-b0b5-be53dcd972cc"
ratings_df['userId'].unique()
# + colab={"base_uri": "https://localhost:8080/"} id="Pxadr8xXfnJb" outputId="25df2acb-f8b0-4388-e1cc-3111754d8c25"
train_df, test_df = train_test_split(ratings_df, test_size=0.2, random_state=1234)
print(train_df.shape)
print(test_df.shape)
# + [markdown] id="XagsmsUHgIVl"
# ### Make Sparse Matrix
# + id="L90pRjJQf2kJ"
sparse_matrix = train_df.groupby('movieId').apply(lambda x : pd.Series(x['rating'].values, index=x['userId'])).unstack()
sparse_matrix.index.name = 'movieId'
# + id="dBO_cPHugg--"
# fill sparse matrix with average of movie ratings
sparse_matrix_withmovie = sparse_matrix.apply(lambda x: x.fillna(x.mean()), axis=1)
# fill sparse matrix with average of user ratings
sparse_matrix_withuser = sparse_matrix.apply(lambda x: x.fillna(x.mean()), axis=0)
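# To see what the two `fillna` calls do, here is a minimal sketch on a 2x3 toy
# ratings matrix: `axis=1` hands each row (movie) to the lambda, `axis=0` each
# column (user).

```python
import numpy as np
import pandas as pd

# Toy ratings matrix (rows = movies, columns = users) with missing entries.
m = pd.DataFrame([[5.0, np.nan, 1.0],
                  [np.nan, 4.0, 2.0]],
                 index=['movie1', 'movie2'], columns=['u1', 'u2', 'u3'])

# axis=1 applies the lambda per row: NaNs become that movie's mean rating.
by_movie = m.apply(lambda x: x.fillna(x.mean()), axis=1)
# axis=0 applies it per column: NaNs become that user's mean rating.
by_user = m.apply(lambda x: x.fillna(x.mean()), axis=0)
print(by_movie)
print(by_user)
```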
# + colab={"base_uri": "https://localhost:8080/"} id="eaLFJ_Lt8ojf" outputId="7a6495d9-4fbf-4a58-cd1d-551f52f2bd9b"
print(sparse_matrix_withmovie.shape)
print(sparse_matrix_withuser.shape)
# + [markdown] id="6ZEiAOJEhSRe"
# ## Matrix Factorization with SVD
# + id="H1ivCqdvhATl"
def get_svd(s_matrix, k=300):
u, s, vt = np.linalg.svd(s_matrix.transpose()) # left singular vector u / right singular vector vt
S = s[:k] * np.identity(k, dtype=float)  # np.float was removed in NumPy 1.24
T = u[:, :k]
Dt = vt[:k, :]
item_factors = np.transpose(np.matmul(S,Dt))
user_factors = np.transpose(T)
return item_factors, user_factors
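# Ignoring the transposes, `get_svd` computes a rank-k factorization
# A ~ item_factors @ user_factors. An equivalent, more compact sketch on random
# data, checking that k equal to the full rank reconstructs the matrix exactly
# while a smaller k only approximates it:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))  # items x users

def truncated_svd(a, k):
    # Keep only the k largest singular values and their vectors.
    u, s, vt = np.linalg.svd(a, full_matrices=False)
    return u[:, :k] * s[:k], vt[:k, :]  # (6, k) item factors, (k, 4) user factors

# k = full rank (4 here): the product reconstructs A exactly.
item_factors, user_factors = truncated_svd(A, 4)
assert np.allclose(item_factors @ user_factors, A)

# k < rank: only the best rank-k approximation remains.
item_factors, user_factors = truncated_svd(A, 2)
err = np.linalg.norm(A - item_factors @ user_factors)
print(err)
```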
# + [markdown] id="h0h2OvXL6wA3"
# ### 1. with average movie ratings
# + id="xx2VGiMQinCb"
item_factors, user_factors = get_svd(sparse_matrix_withmovie)
prediction_result_df = pd.DataFrame(np.matmul(item_factors, user_factors),
columns=sparse_matrix_withmovie.columns.values,
index=sparse_matrix_withmovie.index.values)
movie_prediction_result_df = prediction_result_df.transpose()
# + colab={"base_uri": "https://localhost:8080/"} id="xf1cs52i8KVI" outputId="315d2e07-a1e2-4e8e-9544-e8c9b8c2c62c"
print(item_factors.shape)
print(user_factors.shape)
# + colab={"base_uri": "https://localhost:8080/", "height": 256} id="ria80UXc8W6L" outputId="37f53523-d93d-4d3a-d316-49b414644685"
movie_prediction_result_df.head()
# + [markdown] id="65zMao_28z1O"
# ### 2. with average user ratings
# + id="-UEgWggq8yyz"
item_factors, user_factors = get_svd(sparse_matrix_withuser)
prediction_result_df = pd.DataFrame(np.matmul(item_factors, user_factors),
columns=sparse_matrix_withuser.columns.values,
index=sparse_matrix_withuser.index.values)
user_prediction_result_df = prediction_result_df.transpose()
# + colab={"base_uri": "https://localhost:8080/", "height": 256} id="1HEXdnnb9QCd" outputId="3999a047-38d7-4a4f-d2ad-6afb0ac8f064"
user_prediction_result_df.head()
# + [markdown] id="PZKqgWAp9elK"
# ## Compare user and movie scenario
# + id="F1f2oiKE9bJY"
def evaluate(test_df, prediction_result_df):
groups_with_movie_ids = test_df.groupby(by='movieId')
groups_with_user_ids = test_df.groupby(by='userId')
intersection_movie_ids = sorted(list(set(list(prediction_result_df.columns)).intersection(set(list(groups_with_movie_ids.indices.keys())))))
intersection_user_ids = sorted(list(set(list(prediction_result_df.index)).intersection(set(groups_with_user_ids.indices.keys()))))
print(len(intersection_movie_ids))
print(len(intersection_user_ids))
compressed_prediction_df = prediction_result_df.loc[intersection_user_ids][intersection_movie_ids]
# compute RMSE against test_df
grouped = test_df.groupby(by='userId')
rmse_df = pd.DataFrame(columns=['rmse'])
for userId, group in tqdm(grouped):
if userId in intersection_user_ids:
pred_ratings = compressed_prediction_df.loc[userId][compressed_prediction_df.loc[userId].index.intersection(list(group['movieId'].values))]
pred_ratings = pred_ratings.to_frame(name='rating').reset_index().rename(columns={'index':'movieId','rating':'pred_rating'})
actual_ratings = group[['rating', 'movieId']].rename(columns={'rating':'actual_rating'})
final_df = pd.merge(actual_ratings, pred_ratings, how='inner', on=['movieId'])
final_df = final_df.round(4) # round to 4 decimal places
if not final_df.empty:
rmse = sqrt(mean_squared_error(final_df['actual_rating'], final_df['pred_rating']))
rmse_df.loc[userId] = rmse
return final_df, rmse_df
# + colab={"base_uri": "https://localhost:8080/", "height": 361} id="mbhauaCl-QVn" outputId="a31691dd-f236-4d08-9ff5-df0bf3cb6de3"
result_df, _ = evaluate(test_df, user_prediction_result_df)
print(result_df)
print("For user matrix")
print(f"RMSE: {sqrt(mean_squared_error(result_df['actual_rating'].values, result_df['pred_rating'].values))}")
# + colab={"base_uri": "https://localhost:8080/", "height": 361} id="dW8nWIZ9-TMf" outputId="314d1fea-414e-418b-eb69-1d4853e72a61"
result_df, _ = evaluate(test_df, movie_prediction_result_df)
print(result_df)
print("For movie matrix")
print(f"RMSE: {sqrt(mean_squared_error(result_df['actual_rating'].values, result_df['pred_rating'].values))}")
# + [markdown] id="Dm8gUa7U-bpu"
# ## Experiments on different k with grid search
# + id="6LcKq7Kc-X3-"
def find_best_k(sparse_matrix, maximum_k=100):
print("\nFind best optimized k for Matrix Factorization")
k_candidates = np.arange(50, maximum_k, 10)
final_df = pd.DataFrame(columns=['rmse'], index=k_candidates)
for k in tqdm(k_candidates):
item_factors, user_factors = get_svd(sparse_matrix, k)
each_results_df = pd.DataFrame(np.matmul(item_factors, user_factors),
columns=sparse_matrix.columns.values, index=sparse_matrix.index.values)
each_results_df = each_results_df.transpose()
result_df, _ = evaluate(test_df, each_results_df)
each_rmse = sqrt(mean_squared_error(result_df['actual_rating'].values, result_df['pred_rating'].values))
final_df.loc[k, 'rmse'] = each_rmse  # .loc[k]['rmse'] is chained indexing and may not write back
return final_df
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="13awa9gU-i3C" outputId="da801c47-bcab-4bd3-f409-0525585e402b"
res = find_best_k(sparse_matrix_withmovie, 200)
# + colab={"base_uri": "https://localhost:8080/"} id="Oy0DFyBS-kct" outputId="7fa84118-cc30-429b-e01e-ed332ca0bba1"
res.sort_values(by = 'rmse').index[0]
# + [markdown] id="eVpb5DZbA_IJ"
# ### Visualize
# + colab={"base_uri": "https://localhost:8080/", "height": 305} id="fSCa--zmA9GG" outputId="34a07ae9-4fc4-4a9e-8457-49ae1c84b87e"
plt.plot(res.index, res.rmse)
plt.title("Find best optimized k for Matrix Factorization", fontsize=20)
plt.xlabel('number of k', fontsize=15)
plt.ylabel('rmse', fontsize=15)
plt.show()
# + [markdown] id="qzH8ciEwBQhL"
# ## Matrix Factorization with Simple Python module
#
# - https://pypi.org/project/matrix-factorization/
# + colab={"base_uri": "https://localhost:8080/"} id="al7LWixBBQ9d" outputId="5d87a082-8121-4bca-ad74-b1cbd09ceb58"
# !pip install -q matrix-factorization
# + id="jZon76ViBV6d"
from matrix_factorization import BaselineModel, KernelMF, train_update_test_split
# + colab={"base_uri": "https://localhost:8080/"} id="HlG8PrgTBYKZ" outputId="98e78b72-2b2c-43d1-a24a-7a7980c50f88"
path = '/content/drive/MyDrive/data/movielens'
ratings_df = pd.read_csv(os.path.join(path, 'ratings.csv'), encoding='utf-8')
print(ratings_df.shape)
print(ratings_df.head())
# + colab={"base_uri": "https://localhost:8080/"} id="XE_ubcyLBbaW" outputId="b1c29707-a419-4a64-d04b-9b3c7b9204e2"
train_df, test_df = train_test_split(ratings_df, test_size=0.2, random_state=1234)
print(train_df.shape)
print(test_df.shape)
# + colab={"base_uri": "https://localhost:8080/"} id="i9ksDzJOBc4J" outputId="94d80ecc-7e57-42b2-eebe-2d6b8e4efea4"
train_df.columns
# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="PoAaP-sGBemk" outputId="81bc4644-a7c1-4ab0-9d01-60ae8229bb7b"
new_train_df = train_df
new_train_df = new_train_df.rename(columns={"userId": "user_id", "movieId": "item_id"})
new_train_df.head()
# + id="xG11h6QyBfxk"
(
X_train_initial,
y_train_initial,
X_train_update,
y_train_update,
X_test_update,
y_test_update,
) = train_update_test_split(new_train_df, frac_new_users=0.2)
# + colab={"base_uri": "https://localhost:8080/"} id="yN-4YgS5BhW0" outputId="65e0b34b-7222-4b15-c753-ff5f9e8b6426"
# Initial training
matrix_fact = KernelMF(n_epochs=20, n_factors=100, verbose=1, lr=0.001, reg=0.005)
matrix_fact.fit(X_train_initial, y_train_initial)
# + colab={"base_uri": "https://localhost:8080/"} id="GHYx-rClBi-k" outputId="a59ad1d6-749b-4026-b0ac-dd35470ea0e4"
# Update model with new users
matrix_fact.update_users(
X_train_update, y_train_update, lr=0.001, n_epochs=20, verbose=1
)
# + colab={"base_uri": "https://localhost:8080/"} id="sTdNCxGRBlP5" outputId="68852b4c-d552-4eec-c116-06224d28e251"
pred = matrix_fact.predict(X_test_update)
rmse = mean_squared_error(y_test_update, pred, squared=False)
print(f"\nTest RMSE: {rmse:.4f}")
# + colab={"base_uri": "https://localhost:8080/", "height": 363} id="vCWWGJwjBnB6" outputId="25988d37-4ae0-4f06-a6c4-6200eeb122dc"
# Get recommendations
user = 200
items_known = X_train_initial.query("user_id == @user")["item_id"]
result = matrix_fact.recommend(user= user, items_known=items_known)
# users = [2, 3, 4]
# for user in users:
# items_known = X_train_initial.query("user_id == @user")["item_id"]
# temp = matrix_fact.recommend(user=user, items_known=items_known)
# result = pd.concat([result, temp], axis = 1)
result
# + id="HY9kkdZgBoh_"
# from matrix_factorization import BaselineModel, KernelMF, train_update_test_split
# def find_best_k(sparse_matrix, maximum_k=100):
# print("\nFind best optimized k for Matrix Factorization")
# k_candidates = np.arange(50, maximum_k, 10)
# final_df = pd.DataFrame(columns=['rmse'], index=k_candidates)
# for k in tqdm(k_candidates):
# item_factors, user_factors = get_svd(sparse_matrix, k)
# each_results_df = pd.DataFrame(np.matmul(item_factors, user_factors),
# columns=sparse_matrix.columns.values, index=sparse_matrix.index.values)
# each_results_df = each_results_df.transpose()
# result_df, _ = evaluate(test_df, each_results_df)
# each_rmse = sqrt(mean_squared_error(result_df['actual_rating'].values, result_df['pred_rating'].values))
# final_df.loc[k]['rmse'] = each_rmse
# return final_df
# res = find_best_k(sparse_matrix_withmovie, 200)
# best_k = int(res['rmse'].sort_values().index[0])
# print(best_k)
# path = '/content/drive/MyDrive/data/movielens'
# ratings_df = pd.read_csv(os.path.join(path, 'ratings.csv'), encoding='utf-8')
# train_df, test_df = train_test_split(ratings_df, test_size=0.2, random_state=1234)
# new_train_df = train_df
# new_train_df = new_train_df.rename(columns={"userId": "user_id", "movieId": "item_id"})
# (
# X_train_initial,
# y_train_initial,
# X_train_update,
# y_train_update,
# X_test_update,
# y_test_update,
# ) = train_update_test_split(new_train_df, frac_new_users=0.2)
# # Initial training
# matrix_fact = KernelMF(n_epochs=20, n_factors=best_k, verbose=0, lr=0.001, reg=0.005)
# matrix_fact.fit(X_train_initial, y_train_initial)
# # Update model with new users
# matrix_fact.update_users(
# X_train_update, y_train_update, lr=0.001, n_epochs=20, verbose=1
# )
# # Get recommendations
# users = [x for x in range(2, 611)]
# items_known = X_train_initial.query("user_id == @user")["item_id"]
# result = matrix_fact.recommend(user=1, items_known=items_known)
# for user in tqdm(users):
# items_known = X_train_initial.query("user_id == @user")["item_id"]
# temp = matrix_fact.recommend(user=user, items_known=items_known)
# result = pd.concat([result, temp], axis = 1)
# + id="w_I1Vbu1TmIA"
# result.to_csv('./drive/MyDrive/data/trial_result.csv')
# + [markdown] id="ckz4ksd0XZ9e"
# ### SGD (Stochastic Gradient Descent)
# + colab={"base_uri": "https://localhost:8080/"} id="OBg_HstcXZp5" outputId="d439f662-e875-4477-9768-92a0c5cc49bc"
baseline_model = BaselineModel(method='sgd', n_epochs = 20, reg = 0.005, lr = 0.01, verbose = 1)
baseline_model.fit(X_train_initial, y_train_initial)
pred = baseline_model.predict(X_test_update)
rmse = mean_squared_error(y_test_update, pred, squared = False)
print(f'\nTest RMSE: {rmse:.4f}')
# + colab={"base_uri": "https://localhost:8080/"} id="78iza6sLYEad" outputId="9aa0026e-9b31-4a91-97d8-bc88d6a6e880"
# %%time
baseline_model.update_users(X_train_update, y_train_update, n_epochs = 20, lr = 0.001, verbose = 1)
pred = baseline_model.predict(X_test_update)
rmse = mean_squared_error(y_test_update, pred, squared = False)
print(f'\nTest RMSE: {rmse:.4f}')
# + [markdown] id="8rGZjJ_lYZNG"
# ### ALS (Alternating Least Squares)
# + colab={"base_uri": "https://localhost:8080/"} id="1FDsQ_CEYYjc" outputId="8cac51c3-9b3f-4026-9eaa-4d5c4d785f1e"
baseline_model = BaselineModel(method='als', n_epochs=20, reg=0.5, verbose=1)
baseline_model.fit(X_train_initial, y_train_initial)
pred = baseline_model.predict(X_test_update)
rmse = mean_squared_error(y_test_update, pred, squared=False)
print(f'\nTest RMSE: {rmse:.4f}')
# + colab={"base_uri": "https://localhost:8080/"} id="sbbCbHJ4bDAr" outputId="312c0bc9-b150-4b9c-dc62-9d0e0fd374b1"
# %%time
baseline_model.update_users(X_train_update, y_train_update, n_epochs = 20, lr = 0.001, verbose = 1)
pred = baseline_model.predict(X_test_update)
rmse = mean_squared_error(y_test_update, pred, squared=False)
print(f'\nTest RMSE: {rmse:.4f}')
# + id="CCsbZsh_bIZ6"
| colab/fastcampus/Matrix_Factorization_Trial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import ipywidgets as widgets  # IPython.html.widgets moved to the ipywidgets package
from traitlets import link    # IPython.utils.traitlets moved to the traitlets package
from IPython.display import display
# List of widgets:
dir(widgets)
# Display and link the values of two widgets. You can use the `link` method in the `traitlets` namespace. Syntax: `link((widget1, 'value'), (widget2, 'value'))`.
# Solution:
# # %load sln/1.py
import ipywidgets as widgets
from traitlets import link
from IPython.display import display
slider = widgets.FloatSlider()
text = widgets.FloatText()
widgetlink = link((slider, 'value'), (text, 'value'))
display(slider, text)
| python/demo1/scipy-advanced-tutorial-master/Part2/Exercise 1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Step 15: Extend-KG-with-more-keywords
# 
# |**[Overview](#Overview)**|**[Prior-steps](#Prior-steps)**|**[How-to-use](#How-to-use)**|**[Next-steps](#Next-steps)**|**[Postscript](#Postscript)**|**[Acknowledgements](#Acknowledgments)**|
#
# # Overview
#
# In Step 8, we put forward a first representation of the business domain reflected by the library, based on the relationships between the library keywords.
#
#
# 
# By now, we have also looked at two other (optional) representations of the business domain:
#
# - step 13: a KG based on the topic modelling
#
# - Step 14 : a comparator KG based on best practice - in this case a key paper.
#
# We also have (optionally) generated a longer list of keywords, by using the code in Step 6.
#
# A full list is below, but they include keywords like control, report, regulation.
#
# We can now extend and adapt our high level KG, as shown above, by selecting the most useful of these new keywords, and adding them into the graph.
#
# Since we may also have Step 13 and 14 graphs to compare with, we are less likely to go up a blind alley.
#
# # Installation
#
# At this point you will need Neo4j (or you can do this in YEd or Gephi).
#
# # Prior-steps
# - Step 5, which provides records for the whole library
# - Step 6, to provide a longer list of keywords
# - Step 7, which provided an original KG
# - Step 8, which did a first refinement of the KG to reflect the business domain
#
# It is desirable that you have done Step 12 and 13, to provide an alternative representation of the business domain.
#
# # How-to-use
#
# ## Open Neo4j
#
#hide
#Use either Neo4j Desktop, or create a Neo4j sandbox.
#Change security settings for APOC by adding the lines below to the Neo4j
#configuration file (its location differs between v3.5 and v4):
# apoc.import.file.enabled=true
# apoc.export.file.enabled=true
#See https://neo4j.com/docs/labs/apoc/current/import/graphml
# # Open the KG created in Step 7, and then refined in Step 8.
# This can be opened from
# CALL apoc.import.graphml("https://raw.githubusercontent.com/lawrencerowland/Data-Model-for-Project-Frameworks/master/Project-frameworks-by-using-NLP-with-Python-libraries/Interim-results/Keyword-graph-2.graphml", {readLabels: true,storeNodeIds: true})
# Bring in the keywords from Step 7. These will be the keywords lower on the list, that we discarded when we moved to bring the top connected keywords into a KG. If necessary, it is only one line of code (in Step 7) that can be re-run to generate those lower level keywords - with a higher parameter setting for the number of keywords returned.
import pandas as pd
import os
directory= "/Users/lawrence/Documents/GitHub/Data-Model-for-Project-Frameworks/Project-frameworks-by-using-NLP-with-Python-libraries/Interim-results/"
# Change directory location for your particular set-up.
df = pd.read_csv(directory+'Even_more_Keywords_for_whole_corpus.csv')
# Here are the more interesting keywords, which haven't already arisen in either the Keyword KG or the Topic KG.
# In importance (score) order:
# | High score | Medium Score | Lower score |
# | ------------- | ------------ | ----------- |
# | control, | activity | material |
# | report | document | standard |
# | regulation | functions | effect |
# | System, | time | perform |
# | specification | engineering | structure |
# | communication | product | follows |
# | community | identify | model |
# | plan | | |
# One selects good keywords from above and places them in the knowledge graph with a few lines of Cypher code.
#
# In our example, we have used all the high-score keywords. We are getting diminishing returns, especially as the remaining keywords are standard within most project frameworks and are not nuclear-specific.
# ## Enter cypher code to reflect the relationships
# For the current example, the code is saved in Interim results as 'Step-15-Extended-Keyword-Graph.cypher.txt'
# This could be entered into a fresh Neo4j database to replicate these results.
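# As a hypothetical sketch (the node label `Keyword`, hub name `project`, and relationship type `RELATES_TO` are assumptions for illustration, not taken from the saved cypher file), such Cypher statements can be generated from a keyword list in Python:

```python
# High-score keywords from the table above.
high_score = ["control", "report", "regulation", "system",
              "specification", "communication", "community", "plan"]

def keyword_cypher(keywords, hub="project"):
    # Build MERGE statements linking each keyword to a hub node.
    # Label 'Keyword' and relationship 'RELATES_TO' are illustrative assumptions.
    stmts = [f"MERGE (h:Keyword {{name: '{hub}'}})"]
    for kw in keywords:
        stmts.append(f"MERGE (k:Keyword {{name: '{kw}'}})")
        stmts.append(
            f"MATCH (h:Keyword {{name: '{hub}'}}), (k:Keyword {{name: '{kw}'}}) "
            f"MERGE (h)-[:RELATES_TO]->(k)"
        )
    return "\n".join(stmts)

print(keyword_cypher(high_score))
```

# The resulting statements can be pasted into the Neo4j browser or adapted to match the labels used in the Step-15 cypher file.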
# ## Save the cypher code for records.
#
# CALL apoc.export.cypher.all("all.cypher", {
# format: "plain",
# useOptimizations: {type: "NONE"}
# })
# YIELD file, batches, source, format, nodes, relationships, properties
# RETURN file, batches, source, format, nodes, relationships, properties;
# It may be that the higher-score terms should first be considered as grouping terms, suitable for labels.
#
# Enhancement options:
#
# - One could take appropriate KGs from KBPEDIA or similar (see xx)
| Project-frameworks-by-using-NLP-with-Python-libraries/Jupyter-notebooks/Step-15-Extend-KG-with-more-keywords.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import nltk
from nltk.corpus import state_union
import pandas as pd
import os
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.decomposition import NMF
#from sklearn.metrics.pairwise import cosine_similarity
import matplotlib.pyplot as plt
# %matplotlib inline
import pickle
from sklearn.feature_extraction.text import TfidfVectorizer
# +
#path = 'state-of-the-union-corpus-1989-2017'
#path = 'c:\\Users\\gabri\\AppData\\Roaming\\nltk_data\\corpora\\state_union' # file location
path = 'c:\\Users\\gabri\\OneDrive\\Documents\\Metis_NLP_Kaggle\\Speeches\\sotu' # file location
dirs = os.listdir(path) # reads all the files in that directory
print (len(dirs)) #tell how many files
# -
dirs[:] # file names
# # Selecting the first speech to see what we need to clean.
filename = os.path.join(path, dirs[0]) # dirs is a list, and we are going to study the first element dirs[0]
text_file = open(filename, 'r') #open the first file dirs[0]
lines = text_file.read() # read the file
lines # print what is in the file
lines = lines.replace('\n', ' ') # replace the \n symbols with spaces (reassign: str.replace returns a new string)
#print (lines)
sotu_data = [] #create an empty list
sotu_dict = {} # create an empty dictionary so that we can use file names to list the speeches by name
# # Putting all the speeches into a list, after cleaning them
# +
#The filter() function returns an iterator where the items are filtered
#through a function that tests whether each item is accepted.
# str.isalpha : checks if it is an alpha character.
# lower() : transform everything to lower case
# split() : Split a string into a list where each word is a list item
# loop over all the files:
for i in range(len(dirs)): # loop on all the speeches, dirs is the list of speeches
filename = os.path.join(path, dirs[i]) # location of the speeches
text_file = open(filename, 'r') # read the speeches
lines = text_file.read() #read the speeches
lines = lines.replace('\n', ' ') #replace \n by an empty string
# tranform the speeches in lower cases, split them into a list and then filter to accept only alpha characters
# finally it joins the words with an empty space
clean_lines = ' '.join(filter(str.isalpha, lines.lower().split()))
#print(clean_lines)
sotu_data.append(clean_lines) # append the clean speeches to the sotu_data list.
sotu_dict[filename] = clean_lines # store in dict so we can access clean_lines by filename.
# -
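# The effect of this cleaning can be checked on a toy string. Note that `filter(str.isalpha, ...)` drops any word with attached punctuation or digits entirely, rather than stripping the punctuation:

```python
sample = "Fellow-Citizens: The year 1945 is ending.\nPeace, at last."
sample = sample.replace('\n', ' ')                        # same \n replacement as the loop
clean = ' '.join(filter(str.isalpha, sample.lower().split()))
print(clean)  # the year is at
```

# "fellow-citizens:", "1945", "ending.", "peace," and "last." are all discarded because `str.isalpha` is False for the whole token, which is worth keeping in mind when interpreting the word counts.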
sotu_data[10] #11th speech/element
# +
speech_name = 'Wilson_1919.txt'
sotu_dict[path + '\\' + speech_name]
# -
# # Count Vectorize
# +
#from notebook
#vectorizer = CountVectorizer(stop_words='english') #remove stop words: a, the, and, etc.
vectorizer = TfidfVectorizer(stop_words='english', max_df = 0.42, min_df = 0.01) #remove stop words: a, the, and, etc.
doc_word = vectorizer.fit_transform(sotu_data) #transform into sparse matrix (0, 1, 2, etc. for instance(s) in document)
pairwise_similarity = doc_word * doc_word.T
doc_word.shape # (228 documents, 20932 unique words)
#pairwise_similarity.toarray()
# -
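# Because `TfidfVectorizer` L2-normalises each document vector by default (`norm='l2'`), the product `doc_word * doc_word.T` is exactly the cosine-similarity matrix. The identity can be checked with plain NumPy on a toy matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((4, 6))                               # 4 toy "documents", 6 "terms"
X = X / np.linalg.norm(X, axis=1, keepdims=True)     # L2-normalise rows, as TfidfVectorizer does

sim = X @ X.T                                        # analogue of doc_word * doc_word.T
print(np.diag(sim))                                  # each document is identical to itself
```

# The diagonal is all ones and the matrix is symmetric, matching the interpretation of `pairwise_similarity` below.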
# # Compare how similar speeches are to one another
# +
df_similarity = pd.DataFrame(pairwise_similarity.toarray(), index = dirs, columns = dirs)
df_similarity.head() #similarity dataframe, comparing every document to every other
# -
df_similarity.to_pickle("df_similarity.pkl") #pickle file
# +
df_similarity['Speech_str'] = dirs #matrix comparing speech similarity
df_similarity['Year'] = df_similarity['Speech_str'].replace('[^0-9]', '', regex=True)
df_similarity = df_similarity.drop(['Speech_str'], axis=1) # drop helper column (reassign: drop is not in-place)
df_similarity = df_similarity.sort_values(by=['Year'])
df_similarity.head()
# +
plt.subplots(2, 2, figsize=(30, 15), sharex=True) #4 speeches similarity
#
plt.rcParams.update({'font.size': 20})
plt.subplot(2, 2, 1)
plt.plot(df_similarity['Year'], df_similarity['Adams_1797.txt'])
plt.title("Similarity for Adams 1797 speech")
plt.xlabel("Year")
plt.ylabel("Similarity")
plt.axhline(y=0.0, color='k', linestyle='-')
plt.xticks(['1800', '1850','1900','1950','2000']) # Set label locations.
plt.subplot(2, 2, 2)
plt.plot(df_similarity['Year'], df_similarity['Roosevelt_1945.txt'])
plt.title("Similarity for Roosevelt 1945 speech")
plt.xlabel("Year")
plt.ylabel("Similarity")
plt.axhline(y=0.0, color='k', linestyle='-')
plt.xticks(['1800', '1850','1900','1950','2000']) # Set label locations.
plt.subplot(2, 2, 3)
plt.plot(df_similarity['Year'], df_similarity['Obama_2014.txt'])
plt.title("Similarity for Obama 2014 speech")
plt.xlabel("Year")
plt.ylabel("Similarity")
plt.axhline(y=0.0, color='k', linestyle='-')
plt.xticks(['1800', '1850','1900','1950','2000']) # Set label locations.
plt.subplot(2, 2, 4)
plt.plot(df_similarity['Year'], df_similarity['Trump_2018.txt'])
plt.title("Similarity for Trump 2018 speech")
plt.xlabel("Year")
plt.ylabel("Similarity")
plt.axhline(y=0.0, color='k', linestyle='-')
plt.xticks(['1800', '1850','1900','1950','2000']) # Set label locations.
plt.subplots_adjust(top=0.90, bottom=0.02, wspace=0.30, hspace=0.3)
#sns.set()
plt.show()
# +
#(sotu_dict.keys())
#for i in range(0,len(dirs)):
# print(dirs[i])
# -
# # Transforming the doc into a dataframe
# +
# We have to convert `.toarray()` because the vectorizer returns a sparse matrix.
# For a big corpus, we would skip the dataframe and keep the output sparse.
#pd.DataFrame(doc_word.toarray(), index=sotu_data, columns=vectorizer.get_feature_names()).head(10) #doc_word.toarray() makes 7x19 table, otherwise it would be
#represented in 2 columns
#from notebook
pd.DataFrame(doc_word.toarray(), index=dirs, columns=vectorizer.get_feature_names()).head(95) # dense document-term table; on scikit-learn >= 1.0 use get_feature_names_out()
# -
# # Topic Modeling using nmf
n_topics = 8 # number of topics
nmf_model = NMF(n_topics) # create an object
doc_topic = nmf_model.fit_transform(doc_word) #break into 10 components like SVD
topic_word = pd.DataFrame(nmf_model.components_.round(3), #,"component_9","component_10","component_11","component_12"
index = ["component_1","component_2","component_3","component_4","component_5","component_6","component_7","component_8"],
columns = vectorizer.get_feature_names()) #8 components in final draft
topic_word
#https://stackoverflow.com/questions/16486252/is-it-possible-to-use-argsort-in-descending-order/16486299
#list the top words for each Component:
def print_top_words(model, feature_names, n_top_words):
for topic_idx, topic in enumerate(model.components_): # loop over the model components
print("Component_" + "%d:" % topic_idx ) # print the component
# join the top words by an empty space
# argsort : sorts the list in increasing order, meaning the top are the last words
# then select the top words
# -1 loops backwards
# reading from the tail to find the largest elements
print(" ".join([feature_names[i]
for i in topic.argsort()[:-n_top_words - 1:-1]]))
print()
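# The slice `[:-n_top_words - 1:-1]` walks the ascending `argsort` result backwards from the tail, yielding the indices of the n largest weights in descending order:

```python
import numpy as np

weights = np.array([0.1, 0.9, 0.4, 0.7, 0.2])
n = 3
top = weights.argsort()[:-n - 1:-1]   # indices of the 3 largest weights, descending
print(top)  # [1 3 2]
```

# `argsort` sorts ascending, so the largest entries sit at the end; the negative-step slice reads them back from the tail.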
# # Top 15 words in each component
n_top_words = 15
feature_names = vectorizer.get_feature_names()
print_top_words(nmf_model, feature_names, n_top_words)
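# As a self-contained illustration of the factorisation (toy random data, not the speech corpus), NMF decomposes a non-negative matrix X into document-topic weights W and topic-term weights H with X ≈ WH:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(42)
X = rng.random((6, 10))          # 6 toy documents, 10 terms (non-negative)

nmf = NMF(n_components=2, init='random', random_state=0, max_iter=500)
W = nmf.fit_transform(X)         # document-topic weights, shape (6, 2)
H = nmf.components_              # topic-term weights,    shape (2, 10)

print(W.shape, H.shape)          # (6, 2) (2, 10)
```

# `doc_topic` above plays the role of W and `nmf_model.components_` the role of H; both are non-negative, which is what makes the topic weights interpretable.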
# +
#Component x Speech
H = pd.DataFrame(doc_topic.round(5), index=dirs, #,"component_9","component_10"
columns = ["component_1","component_2", "component_3","component_4","component_5","component_6","component_7","component_8"])
# -
H.head()
H.iloc[30:35]
H.iloc[60:70]
H.iloc[225:230]
# # Use NMF to plot top 15 words for each of 8 components
def plot_top_words(model, feature_names, n_top_words, title):
    fig, axes = plt.subplots(2, 4, figsize=(30, 15), sharex=True)
    axes = axes.flatten()
    for topic_idx, topic in enumerate(model.components_):
        top_features_ind = topic.argsort()[:-n_top_words - 1:-1]
        top_features = [feature_names[i] for i in top_features_ind]
        weights = topic[top_features_ind]

        ax = axes[topic_idx]
        ax.barh(top_features, weights, height=0.7)
        ax.set_title(f'Topic {topic_idx + 1}',
                     fontdict={'fontsize': 30})
        ax.invert_yaxis()
        ax.tick_params(axis='both', which='major', labelsize=20)
        for i in 'top right left'.split():
            ax.spines[i].set_visible(False)
    fig.suptitle(title, fontsize=40)

    plt.subplots_adjust(top=0.90, bottom=0.05, wspace=0.90, hspace=0.3)
    plt.show()
n_top_words = 12
feature_names = vectorizer.get_feature_names()
plot_top_words(nmf_model, feature_names, n_top_words,
'Topics in NMF model') #title
# # Sort speeches Chronologically
H1 = H
H1['Speech_str'] = dirs
H1['Year'] = H1['Speech_str'].replace('[^0-9]', '', regex=True)
H1 = H1.sort_values(by = ['Year'])
H1.to_csv("Data_H1.csv", index = False) #Save chronologically sorted speeches in this csv
H1.head()
H1.to_pickle("H1.pkl") #pickle chronological csv file
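# The `replace('[^0-9]', '', regex=True)` trick strips everything but digits from the filenames. Note it yields strings, which sort correctly here only because all years have four digits:

```python
import pandas as pd

names = pd.Series(['Adams_1797.txt', 'Obama_2014.txt', 'Trump_2018.txt'])
years = names.replace('[^0-9]', '', regex=True)   # strip everything but digits
print(years.tolist())  # ['1797', '2014', '2018']
```

# If the corpus ever mixed three- and four-digit years, these strings would need `astype(int)` before sorting.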
# # Plots of Components over Time (check Powerpoint/Readme for more insights)
# +
plt.subplots(4, 2, figsize=(30, 15), sharex=True)
plt.rcParams.update({'font.size': 20})
plt.subplot(4, 2, 1)
plt.plot(H1['Year'], H1['component_1'] ) #Label axis and titles for all plots
plt.title("19th Century Economic Terms")
plt.xlabel("Year")
plt.ylabel("Component_1")
plt.axhline(y=0.0, color='k', linestyle='-')
plt.xticks(['1800', '1850','1900','1950','2000']) # Set label locations.
plt.subplot(4, 2, 2)
plt.plot(H1['Year'], H1['component_2'])
plt.title("Modern Economic Language")
plt.xlabel("Year")
plt.ylabel("Component_2")
plt.axhline(y=0.0, color='k', linestyle='-')
plt.xticks(['1800', '1850','1900','1950','2000']) # Set label locations.
plt.subplot(4, 2, 3)
plt.plot(H1['Year'], H1['component_3'])
plt.title("Growth of US Gov't & Programs")
plt.xlabel("Year")
plt.ylabel("Component_3")
plt.axhline(y=0.0, color='k', linestyle='-')
plt.xticks(['1800', '1850','1900','1950','2000']) # Set label locations.
plt.subplot(4, 2, 4)
plt.plot(H1['Year'], H1['component_4'])
plt.title("Early Foreign Policy & War")
plt.xlabel("Year")
plt.ylabel("Component_4")
plt.axhline(y=0.0, color='k', linestyle='-')
plt.xticks(['1800', '1850','1900','1950','2000']) # Set label locations.
plt.subplot(4, 2, 5)
plt.plot(H1['Year'], H1['component_5'])
plt.title("Progressive Era & Roaring 20s")
plt.xlabel("Year")
plt.ylabel("Component_5")
plt.axhline(y=0.0, color='k', linestyle='-')
plt.xticks(['1800', '1850','1900','1950','2000']) # Set label locations.
plt.subplot(4, 2, 6)
plt.plot(H1['Year'], H1['component_6'])
plt.title("Before, During, After the Civil War")
plt.xlabel("Year")
plt.ylabel("Component_6")
plt.axhline(y=0.0, color='k', linestyle='-')
plt.xticks(['1800', '1850','1900','1950','2000']) # Set label locations.
plt.subplot(4, 2, 7)
plt.plot(H1['Year'], H1['component_7'])
plt.title("World War & Cold War")
plt.xlabel("Year")
plt.ylabel("Component_7")
plt.axhline(y=0.0, color='k', linestyle='-')
plt.xticks(['1800', '1850','1900','1950','2000']) # Set label locations.
plt.subplot(4, 2, 8)
plt.plot(H1['Year'], H1['component_8'])
plt.title("Iraq War & Terrorism")
plt.xlabel("Year")
plt.ylabel("Component_8")
plt.axhline(y=0.0, color='k', linestyle='-')
plt.xticks(['1800', '1850','1900','1950','2000']) # Set label locations.
plt.subplots_adjust(top=0.90, bottom=0.02, wspace=0.30, hspace=0.4)
plt.show()
# -
# ## Component 1: 19th Century Economics
H1.iloc[75:85] #Starts 1831. Peak starts 1868 (apex=1894), Nosedive in 1901 w/ Teddy. 4 Yr resurgence under Taft (1909-1912)
# ## Component 2: Modern Economic Language
H1.iloc[205:215] #1960s: Starts under JFK in 1961, peaks w/ Clinton, dips post 9/11 Bush, resurgence under Obama
# ## Component 3: Growth of US Government and Federal Programs
H1.iloc[155:165] #1921, 1929-1935. Big peak in 1946-1950 (1951 Cold War). 1954-1961 Eisenhower. Low after Reagan Revolution (1984)
# ## Component 4: Early Foreign Policy and War
H1.iloc[30:40] #Highest from 1790-1830, Washington to Jackson
# ## Component 5: Progressive Era, Roaring 20s
H1.iloc[115:125] #Peaks in 1900-1930.Especially Teddy Roosevelt. Dip around WW1
# ## Component 6: War Before, During, and After the Civil War
H1.iloc[70:80] #Starts w/ Jackson 1829, Peaks w/ Mexican-American War (1846-1848). Drops 60% w/ Lincoln. Peak ends w/ Johnson 1868. Remains pretty low after 1876 (Reconstruction ends)
# ## Component 7: World Wars and Korean War
H1.iloc[155:165] #Minor peak around WW1. Massive spike in response to the Cold War and Korean War (1951). Eisenhower drops (except 1960 U2). <NAME>. Peaks again 1980 (<NAME> foreign policy crises)
# ## Component 8: Iraq War and Terrorism
H1.iloc[210:220] #Minor peak w/ Bush 1990. BIG peak w/ Bush 2002. Ends w/ Obama 2009. Resurgence in 2016/18 (ISIS?)
# # Word Cloud
from wordcloud import WordCloud, STOPWORDS, ImageColorGenerator
speech_name = 'Lincoln_1864.txt'
sotu_dict[path + '\\' + speech_name]
#example = sotu_data[0]
example = sotu_dict[path + '\\' + speech_name]
wordcloud = WordCloud(max_words=100).generate(example)
plt.title("WordCloud of " + speech_name)
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis("off")
plt.show()
| state_of_union_main.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/spyderweb-abdul/Deletion-Detection-in-Unstructured-Data/blob/main/edge_reconstruction_main.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="RRgFQaGhJTZ_"
import os, sys
#from google.colab import drive
#drive.mount('/content/drive')
#nb_path = '/content/libraries'
sys.path.append('/content/drive/My Drive/Colab Notebooks/VGRNN/')
sys.path.append('/content/drive/My Drive/Colab Notebooks/')
#os.symlink('/content/drive/My Drive/Colab Notebooks', nb_path)
#sys.path.insert(0,nb_path)
# + id="iB5xXwYNKQeS" colab={"base_uri": "https://localhost:8080/"} outputId="abf3552b-0268-4284-f0ab-9418f01914a8"
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
# #!python --version #3.7.10
import io
import math
import numpy as np
import torch
import torch.nn as nn
import torch.utils
import torch.utils.data
from torchvision import datasets, transforms
from torch.autograd import Variable
import matplotlib.pyplot as plt
from scipy.ndimage import rotate
from torch.distributions.uniform import Uniform
from torch.distributions.normal import Normal
#from sklearn.datasets import fetch_mldata
# from torch_geometric import nn as tgnn
from input_data import load_data
from preprocessing import preprocess_graph, construct_feed_dict, sparse_to_tuple, mask_test_edges
import scipy.sparse as sp
from scipy.linalg import block_diag
from torch.nn.parameter import Parameter
from torch.nn.modules.module import Module
import tarfile
import torch.nn.functional as F
import copy
import time
#print(torch.__version__) #1.9.0+cu102
#print(torch.version.cuda) #10.2
# #!pip uninstall torch-scatter torch-sparse torch-geometric
# !pip install -q torch-scatter -f https://pytorch-geometric.com/whl/torch-1.6.0+cu102.html
# !pip install -q torch-sparse -f https://pytorch-geometric.com/whl/torch-1.6.0+cu102.html
# !pip install -q torch-geometric
import torch_scatter
from torch_scatter import scatter_mean, scatter_max, scatter_add
from torch_geometric.utils import remove_self_loops, add_self_loops, degree
#from torch_geometric.datasets import Planetoid
import networkx as nx
import scipy.io as sio
import inspect
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score, roc_auc_score, average_precision_score
from sklearn.manifold import TSNE
import copy
import pickle
# !pip install sparse
import sparse
import time
import datetime
from datetime import timedelta
import pandas as pd
from torch.utils.tensorboard import SummaryWriter
from warnings import simplefilter
# ignore all future warnings
simplefilter(action='ignore', category=FutureWarning)
# + id="6ryNuuLkKjAv"
seed = 3
np.random.seed(seed)
# utility functions
def uniform(size, tensor):
stdv = 1.0 / math.sqrt(size)
if tensor is not None:
tensor.data.uniform_(-stdv, stdv)
def glorot(tensor):
stdv = math.sqrt(6.0 / (tensor.size(0) + tensor.size(1)))
if tensor is not None:
tensor.data.uniform_(-stdv, stdv)
def zeros(tensor):
if tensor is not None:
tensor.data.fill_(0)
def ones(tensor):
if tensor is not None:
tensor.data.fill_(1)
def reset(nn):
def _reset(item):
if hasattr(item, 'reset_parameters'):
item.reset_parameters()
if nn is not None:
if hasattr(nn, 'children') and len(list(nn.children())) > 0:
for item in nn.children():
_reset(item)
else:
_reset(nn)
def tuple_to_array(lot):
out = np.array(list(lot[0]))
for i in range(1, len(lot)):
out = np.vstack((out, np.array(list(lot[i]))))
return out
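# The `glorot` initialiser above samples from U(-a, a) with a = sqrt(6 / (fan_in + fan_out)) (Xavier/Glorot uniform); the bound itself needs no torch to compute:

```python
import math

def glorot_bound(fan_in, fan_out):
    # Xavier/Glorot uniform bound, matching the torch-based glorot() above
    return math.sqrt(6.0 / (fan_in + fan_out))

a = glorot_bound(64, 32)
print(round(a, 4))  # 0.25
```

# For the (64, 32) weight slices used by `E_GCN_Conv`, each entry is drawn uniformly from (-0.25, 0.25).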
# + id="NO-RKqWpKt3Q"
# masking functions
def mask_edges_det(adjs_list):
adj_train_l, train_edges_l, val_edges_l = [], [], []
val_edges_false_l, test_edges_l, test_edges_false_l = [], [], []
edges_list = []
for i in range(0, len(adjs_list)):
# Function to build test set with 10% positive links
# NOTE: Splits are randomized and results might slightly deviate from reported numbers in the paper.
adj = adjs_list[i]
# Remove diagonal elements
adj = adj - sp.dia_matrix((adj.diagonal()[np.newaxis, :], [0]), shape=adj.shape)
adj.eliminate_zeros()
# Check that diag is zero:
assert np.diag(adj.todense()).sum() == 0
#get the upper trianglar portion of the matrix.
adj_triu = sp.triu(adj)
#convert the matrix into a tuple of the format: ((([1, 10]), ([1, 1, 1,..., 1, 1, 1])),...)
adj_tuple = sparse_to_tuple(adj_triu)
#get only the 0 index of the tuple. Returns as list: [[1 10],[1 12],[1 4],[20 25]]
#shape: (n, 2)
edges = adj_tuple[0]
#convert the adj sparse matrix to tuple and return the result of the 0 index of the tuple
edges_all = sparse_to_tuple(adj)[0]
#get the number of test set: row number(n)/10
num_test = int(np.floor(edges.shape[0] / 10.))
#get the number of the validation set: row number(n)/20
num_val = int(np.floor(edges.shape[0] / 20.))
#list numbers of edge index based on the row axis of the edges
#all_edge_idx = range(edges.shape[0])
all_edge_idx = list(range(edges.shape[0]))
#randomize the result
np.random.shuffle(all_edge_idx)
#get validation edge index from the randomized edge list. Extract only numbers equal to num_val
val_edge_idx = all_edge_idx[:num_val]
#get test edge index from the randomized edge list.
#Extract only numbers equal to [num_val : (num_val + num_test)]
test_edge_idx = all_edge_idx[num_val:(num_val + num_test)]
#get the main test edge set by extracting values fom the edge list indexed by the test_edge_idx list
test_edges = edges[test_edge_idx]
#get the main validation edge set by extracting values fom the edge list indexed by the test_edge_idx list
val_edges = edges[val_edge_idx]
#delete the stacked test and validation edge set (along the axis=0) from the list of edges.
#This will be the training set
# [[162 165], [162 169], [162 172], [171 174]]
train_edges = np.delete(edges, np.hstack([test_edge_idx, val_edge_idx]), axis=0)
#append the list of main edges
edges_list.append(edges)
def ismember(a, b, tol=5):
#Test whether all array elements along a given axis evaluate to True. (np.all)
rows_close = np.all(np.round(a - b[:, None], tol) == 0, axis=-1)
return np.any(rows_close) #np.any evaluate whether any elements evaluate to True
#get false edge test set
test_edges_false = []
#Do while test_egde_false list length is still less than the tst_edge list
while len(test_edges_false) < len(test_edges):
#get random integers between 0 (lower) and the row size of the adj (higher)
idx_i = np.random.randint(0, adj.shape[0])
idx_j = np.random.randint(0, adj.shape[0])
#if right and left values are equal, go back to the top loop
if idx_i == idx_j:
continue
#if the tuple of the 2 values are part of edges_all (returns a bool), back to top
if ismember([idx_i, idx_j], edges_all):
continue
#if the empty test_edges_false list is not None, check the conditions
if test_edges_false:
#if the tuple of the 2 values are part of test_edges_false list, back to top
if ismember([idx_j, idx_i], np.array(test_edges_false)):
continue
if ismember([idx_i, idx_j], np.array(test_edges_false)):
continue
#append result to the test_edges_false list
test_edges_false.append([idx_i, idx_j]) #result sample: [[19, 2], [177, 163], [15, 119], [3, 155],...]
#get false validation edge set
val_edges_false = []
while len(val_edges_false) < len(val_edges):
idx_i = np.random.randint(0, adj.shape[0])
idx_j = np.random.randint(0, adj.shape[0])
if idx_i == idx_j:
continue
if ismember([idx_i, idx_j], train_edges):
continue
if ismember([idx_j, idx_i], train_edges):
continue
if ismember([idx_i, idx_j], val_edges):
continue
if ismember([idx_j, idx_i], val_edges):
continue
if val_edges_false:
if ismember([idx_j, idx_i], np.array(val_edges_false)):
continue
if ismember([idx_i, idx_j], np.array(val_edges_false)):
continue
val_edges_false.append([idx_i, idx_j])
        # assert raises an AssertionError if its condition is False; here we
        # confirm that the sampled edge sets do not overlap. ~ismember(a, b)
        # is True when no row of a appears in b (bitwise NOT on the bool result).
assert ~ismember(test_edges_false, edges_all)
assert ~ismember(val_edges_false, edges_all)
assert ~ismember(val_edges, train_edges)
assert ~ismember(test_edges, train_edges)
assert ~ismember(val_edges, test_edges)
#get np.ones of elements of the row size of the train_edges
data = np.ones(train_edges.shape[0])
# Re-build adj matrix for the training set
r""" [ : , 0 ] means (more or less) [ first_row:last_row , column_0 ].
If you have a 2-dimensional list/matrix/array, this notation will give you all
the values in column 0 (from all rows)."""
adj_train = sp.csr_matrix((data, (train_edges[:, 0], train_edges[:, 1])), shape=adj.shape)
#add the new adjacency matrix to its transpose
adj_train = adj_train + adj_train.T
#fill all the initialised list
adj_train_l.append(adj_train)
train_edges_l.append(train_edges)
val_edges_l.append(val_edges)
test_edges_l.append(test_edges)
val_edges_false_l.append(val_edges_false)
test_edges_false_l.append(test_edges_false)
# NOTE: these edge lists only contain single direction of edge!
return adj_train_l, train_edges_l, val_edges_l, val_edges_false_l, test_edges_l, test_edges_false_l
def mask_edges_prd(adjs_list):
pos_edges_l , false_edges_l = [], []
edges_list = []
for i in range(0, len(adjs_list)):
# Function to build test set with 10% positive links
# NOTE: Splits are randomized and results might slightly deviate from reported numbers in the paper.
adj = adjs_list[i]
# Remove diagonal elements
adj = adj - sp.dia_matrix((adj.diagonal()[np.newaxis, :], [0]), shape=adj.shape)
adj.eliminate_zeros()
# Check that diag is zero:
assert np.diag(adj.todense()).sum() == 0
adj_triu = sp.triu(adj)
adj_tuple = sparse_to_tuple(adj_triu)
edges = adj_tuple[0]
edges_all = sparse_to_tuple(adj)[0]
num_false = int(edges.shape[0])
pos_edges_l.append(edges)
def ismember(a, b, tol=5):
rows_close = np.all(np.round(a - b[:, None], tol) == 0, axis=-1)
return np.any(rows_close)
edges_false = []
while len(edges_false) < num_false:
idx_i = np.random.randint(0, adj.shape[0])
idx_j = np.random.randint(0, adj.shape[0])
if idx_i == idx_j:
continue
if ismember([idx_i, idx_j], edges_all):
continue
if edges_false:
if ismember([idx_j, idx_i], np.array(edges_false)):
continue
if ismember([idx_i, idx_j], np.array(edges_false)):
continue
edges_false.append([idx_i, idx_j])
assert ~ismember(edges_false, edges_all)
false_edges_l.append(edges_false)
# NOTE: these edge lists only contain single direction of edge!
return pos_edges_l, false_edges_l
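# The 10%/5% split sizes used by both masking functions can be sketched on a toy single-direction edge list with pure NumPy (no scipy needed):

```python
import numpy as np

rng = np.random.default_rng(3)
n_nodes = 30
# toy single-direction edge list, like the output of sp.triu + sparse_to_tuple
edges = np.array([(i, j) for i in range(n_nodes) for j in range(i + 1, n_nodes)])[:100]

num_test = int(np.floor(edges.shape[0] / 10.))  # 10% of edges held out for test
num_val = int(np.floor(edges.shape[0] / 20.))   # 5% held out for validation

idx = rng.permutation(edges.shape[0])           # randomised edge indices
val_edges = edges[idx[:num_val]]
test_edges = edges[idx[num_val:num_val + num_test]]
train_edges = np.delete(edges, idx[:num_val + num_test], axis=0)

print(len(train_edges), len(val_edges), len(test_edges))  # 85 5 10
```

# `mask_edges_det` does the same split per snapshot, then additionally samples equally sized false (non-existent) edge sets for evaluation.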
# + id="hxYyTQxJK1m4"
# loading data
path = 'drive/My Drive/Colab Notebooks/VGRNN/data/'
# # Enron dataset
with open(path+'enron_data/enron_adj_sparse_matrix.pickle', 'rb') as handle:
adj_sparse_matrix = pickle.load(handle)
with open(path+'enron_data/enron_adj_dense_matrix.pickle', 'rb') as handle:
adj_dense_matrix = pickle.load(handle)
with open(path+'enron_data/enron_edge_attribute_matrix.pickle', 'rb') as handle:
edge_attr_matrix = pickle.load(handle)
with open(path+'enron_data/enron_node_attribute_matrix.pickle', 'rb') as handle:
node_attr_matrix = pickle.load(handle)
adj_sparse_matrix = adj_sparse_matrix[7:34]
adj_dense_matrix = adj_dense_matrix[7:34]
edge_attr_matrix = edge_attr_matrix[7:34]
node_attr_matrix = node_attr_matrix[7:34]
outs = mask_edges_det(adj_sparse_matrix)
#reconstructed adjacency matrix of the training set
adj_train_l = outs[0]
#List of training edge set
train_edges_l = outs[1]
#List of validation edge set
val_edges_l = outs[2]
#List of false validation edge set(i.e., never exist)
val_edges_false_l = outs[3]
#List of test edge set
test_edges_l = outs[4]
#List of false test edge set
test_edges_false_l = outs[5]
pos_edges_l, false_edges_l = mask_edges_prd(adj_sparse_matrix)
# creating edge list
edge_idx_list = []
for i in range(len(train_edges_l)):
edge_idx_list.append(torch.tensor(np.transpose(train_edges_l[i]), dtype=torch.long))
#print('Training edges: ', edge_idx_list)
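# `np.transpose` turns the (E, 2) array of edge pairs into the (2, E) `edge_index` layout that PyTorch Geometric expects; a NumPy-only sketch:

```python
import numpy as np

train_edges = np.array([[0, 1], [1, 2], [2, 3]])   # (E, 2) pairs, as produced by the masking step
edge_index = np.transpose(train_edges)             # (2, E): row 0 = sources, row 1 = targets
print(edge_index.shape)  # (2, 3)
```

# Wrapping this in `torch.tensor(..., dtype=torch.long)` as above gives the tensor consumed by `remove_self_loops` and the convolution layers.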
# + id="Q8EhQwXnK-iv"
# layers
class E_GCN_Conv(nn.Module):
def __init__(self, in_channels, out_channels, act=F.relu, improved=True, bias=True, num_channels=10, aggr='sum'):
super(E_GCN_Conv, self).__init__()
self.in_channels = in_channels #[64]
self.out_channels = out_channels #[32]
self.act = act
self.num_channels = num_channels
self.weight = Parameter(torch.Tensor(in_channels, out_channels, num_channels))
if bias:
self.bias = Parameter(torch.Tensor(out_channels, num_channels))
else:
self.register_parameter('bias', None)
self.reset_parameters()
if (aggr == 'concat'):
self.aggr = 'concat'
self.last_ops = nn.Linear(self.out_channels * self.num_channels, self.out_channels)
elif (aggr == 'sum'):
self.aggr = 'sum'
self.last_ops = nn.Linear(self.out_channels, self.out_channels)
def reset_parameters(self):
glorot(self.weight)
zeros(self.bias)
def forward(self, x, edge_index, edge_attr):
#add or remove node self loop. We remove in our case
edge_index, edge_attr = remove_self_loops(edge_index, edge_attr)
#edge index rows and column representation
row, col = edge_index #[21]
#normalize the adjacency matrix
#deg = scatter_add(edge_attr, row, dim=0, dim_size=x.size(0))
deg = degree(col, x.size(0), dtype=x.dtype)
deg_inv_sqrt = deg.pow(-0.5)
deg_inv_sqrt[deg_inv_sqrt == float('inf')] = 0
#reshape the row and column vectors
deg_inv_sqrt_row = deg_inv_sqrt[row].view(-1, 1) #[[1.0000],[1.0000]]
deg_inv_sqrt_col = deg_inv_sqrt[col].view(-1, 1) #[[0.5774],[0.0000]]
#multiply row and col vectors with edge weights (We replace the adjacency matrix with the edge tensor)
norm_edge = deg_inv_sqrt_row * edge_attr * deg_inv_sqrt_col #size([edge_index[row/col] No., 14])
#Slice and list the normalized vectors based on the nu. of channels
norm = []
for i in range(0, edge_attr.size()[1]):
norm.append(norm_edge[:, i:i+1])
node_state_list = []
#for each edge channels, we perform a weighted convolution with edge weights as co-efficient
for c in range(self.num_channels):
if self.in_channels > self.out_channels:
#if the weight matrix is not none
if self.weight is not None:
#matrix product of the node (hidden state) with the weight matrix
weighted_nodes = torch.matmul(x, self.weight[:, :, c]) #(size[149, 32])
else:
#otherwise, hidden state remains same
weighted_nodes = x
#if vectors are normalized
if norm is not None:
#multiply each element in the each channels of the norm with weighted hidden state
weighted_conv = torch.mul(norm[c], weighted_nodes[row]) #size(21, 32)
#propagate messages through all edges and update the nodes
weighted_conv_sum = scatter_add(weighted_conv, col, dim=0, dim_size=x.size(0)) #size(149, 32)
else:
weighted_conv_sum = scatter_add(weighted_nodes[row], col, dim=0, dim_size=x.size(0))
channel_node_state = weighted_conv_sum
else:
if norm is not None:
unweighted_conv = torch.mul(norm[c], x[row])
unweighted_conv_sum = scatter_add(unweighted_conv, col, dim=0, dim_size=x.size(0))
else:
unweighted_conv_sum = scatter_add(x[row], col, dim=0, dim_size=x.size(0))
if self.weight is not None:
channel_node_state = torch.matmul(unweighted_conv_sum.float(), self.weight[:, :, c])
#add linear bias if True
if self.bias is not None:
channel_node_state = channel_node_state + self.bias[:, c]
#pass param through a linear activation function
channel_node_state = self.act(channel_node_state)
#append each channel to node state list
node_state_list.append(channel_node_state) #size(N, 32/16)
#we consider two aggregation method across each channels of the edge weights
#1. Sum aggregation method
if (self.aggr == 'sum'):
node_states = torch.stack(node_state_list, dim=1).sum(1).float() #[N, 32]
#2. Concat aggregation method
elif (self.aggr == 'concat'):
node_states = torch.cat(node_state_list, dim=1).float()
#pass aggregated vectors through a flexible linear transformation layer
out = self.last_ops(node_states) #size(N, 32/16)
return out
def __repr__(self):
return '{}({}, {}, {})'.format(self.__class__.__name__, self.in_channels,
self.out_channels, self.num_channels)
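The core of the convolution above is a symmetric degree normalization of the edge weights, `norm_edge = D^{-1/2} * w * D^{-1/2}`, followed by a scatter-add of the normalized messages into each target node. A minimal numpy sketch of that step, with `np.add.at` standing in for `scatter_add` (the toy graph and feature values here are illustrative only):

```python
import numpy as np

# Hypothetical mini-graph: 3 nodes, 4 directed edges, scalar edge weights.
edge_index = np.array([[0, 1, 1, 2],
                       [1, 0, 2, 1]])
edge_attr = np.array([1.0, 1.0, 2.0, 2.0])
row, col = edge_index

# Weighted degree of each source node, then w_ij / sqrt(deg_i * deg_j)
# per edge, mirroring the norm_edge computation in E_GCN_Conv.
deg = np.zeros(3)
np.add.at(deg, row, edge_attr)
deg_inv_sqrt = deg ** -0.5
norm_edge = deg_inv_sqrt[row] * edge_attr * deg_inv_sqrt[col]

# scatter_add equivalent: sum each normalized message into its target node.
x = np.eye(3)                       # toy node features
out = np.zeros((3, 3))
np.add.at(out, col, norm_edge[:, None] * x[row])
```

The output has one row per node, each row being the normalized-weighted sum of its in-neighbours' features.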
# + id="wwpqrN33LFIJ"
class gru_gcn(nn.Module):
def __init__(self, input_size, hidden_size, n_layer, bias=True):
super(gru_gcn, self).__init__()
self.hidden_size = hidden_size
self.n_layer = n_layer
# gru weights
#use nn.ModuleList so the sub-convolutions register as model parameters
self.weight_xz = nn.ModuleList()
self.weight_hz = nn.ModuleList()
self.weight_xr = nn.ModuleList()
self.weight_hr = nn.ModuleList()
self.weight_xh = nn.ModuleList()
self.weight_hh = nn.ModuleList()
for i in range(self.n_layer):
if i==0:
self.weight_xz.append(E_GCN_Conv(input_size, hidden_size, act=lambda x:x, bias=bias))
self.weight_hz.append(E_GCN_Conv(hidden_size, hidden_size, act=lambda x:x, bias=bias))
self.weight_xr.append(E_GCN_Conv(input_size, hidden_size, act=lambda x:x, bias=bias))
self.weight_hr.append(E_GCN_Conv(hidden_size, hidden_size, act=lambda x:x, bias=bias))
self.weight_xh.append(E_GCN_Conv(input_size, hidden_size, act=lambda x:x, bias=bias))
self.weight_hh.append(E_GCN_Conv(hidden_size, hidden_size, act=lambda x:x, bias=bias))
else:
self.weight_xz.append(E_GCN_Conv(hidden_size, hidden_size, act=lambda x:x, bias=bias))
self.weight_hz.append(E_GCN_Conv(hidden_size, hidden_size, act=lambda x:x, bias=bias))
self.weight_xr.append(E_GCN_Conv(hidden_size, hidden_size, act=lambda x:x, bias=bias))
self.weight_hr.append(E_GCN_Conv(hidden_size, hidden_size, act=lambda x:x, bias=bias))
self.weight_xh.append(E_GCN_Conv(hidden_size, hidden_size, act=lambda x:x, bias=bias))
self.weight_hh.append(E_GCN_Conv(hidden_size, hidden_size, act=lambda x:x, bias=bias))
def forward(self, inp, edge_index, edge_tensor, h):
h_out = torch.zeros(h.size())
for i in range(self.n_layer):
if i==0:
z_g = torch.sigmoid(self.weight_xz[i](inp, edge_index, edge_tensor) + self.weight_hz[i](h[i], edge_index, edge_tensor))
r_g = torch.sigmoid(self.weight_xr[i](inp, edge_index, edge_tensor) + self.weight_hr[i](h[i], edge_index, edge_tensor))
h_tilde_g = torch.tanh(self.weight_xh[i](inp, edge_index, edge_tensor) + self.weight_hh[i](r_g * h[i], edge_index, edge_tensor))
h_out[i] = z_g * h[i][0: inp.size(0)] + (1 - z_g) * h_tilde_g
# out = self.decoder(h_t.view(1,-1))
else:
z_g = torch.sigmoid(self.weight_xz[i](h_out[i-1], edge_index, edge_tensor) + self.weight_hz[i](h[i], edge_index, edge_tensor))
r_g = torch.sigmoid(self.weight_xr[i](h_out[i-1], edge_index, edge_tensor) + self.weight_hr[i](h[i], edge_index, edge_tensor))
h_tilde_g = torch.tanh(self.weight_xh[i](h_out[i-1], edge_index, edge_tensor) + self.weight_hh[i](r_g * h[i], edge_index, edge_tensor))
h_out[i] = z_g * h[i] + (1 - z_g) * h_tilde_g
# out = self.decoder(h_t.view(1,-1))
out = h_out
return out, h_out
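`gru_gcn` applies standard GRU gating, but with graph convolutions in place of dense layers. A minimal numpy sketch of one gating step, with random weight matrices standing in for the `E_GCN_Conv` operators (weights and dimensions here are illustrative only):

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# One GRU update with the same gating as gru_gcn; plain matrices stand in
# for the graph convolutions, so this is a sketch, not the model itself.
rng = np.random.default_rng(0)
d = 4
Wxz, Whz, Wxr, Whr, Wxh, Whh = (rng.standard_normal((d, d)) for _ in range(6))

x, h = rng.standard_normal(d), rng.standard_normal(d)
z = sigmoid(x @ Wxz + h @ Whz)               # update gate
r = sigmoid(x @ Wxr + h @ Whr)               # reset gate
h_tilde = np.tanh(x @ Wxh + (r * h) @ Whh)   # candidate state
h_new = z * h + (1 - z) * h_tilde            # same convex mix as gru_gcn
```

Note the final line matches the convex combination used in `gru_gcn`'s forward pass, where `z_g` weights the old state and `(1 - z_g)` weights the candidate.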
# + id="SNec0ycDLPfC"
# VGRNN model
class VGAE_Edge(nn.Module):
def __init__(self, node_feat_dim, hidden_dim, latent_var_dim, n_layers, edge_feat_dim, eps, conv='GCN', bias=False):
super(VGAE_Edge, self).__init__()
#input dimension
self.node_feat_dim = node_feat_dim
self.eps = eps
#hidden_layer dim.
self.hidden_dim = hidden_dim #32
#latent variable dim.
self.latent_var_dim = latent_var_dim #10
self.n_layers = n_layers #1
self.edge_feat_dim = edge_feat_dim
if conv == 'GCN':
#flexible sequential neural network linear transformations
self.input_emb = nn.Sequential(nn.Linear(node_feat_dim, hidden_dim), nn.ReLU())
self.output_emb = nn.Sequential(nn.Linear(latent_var_dim, hidden_dim))
#encoder functions
self.encoder = E_GCN_Conv(hidden_dim + hidden_dim, hidden_dim)
self.encoder_mu = E_GCN_Conv(hidden_dim, latent_var_dim, act=lambda x:x)
self.encoder_sigma = E_GCN_Conv(hidden_dim, latent_var_dim, act=F.softplus)
#linear transformations of the prior functions
self.prior = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU())
self.prior_mu = nn.Sequential(nn.Linear(hidden_dim, latent_var_dim))
self.prior_sigma = nn.Sequential(nn.Linear(hidden_dim, latent_var_dim), nn.Softplus())
#recurrent neural networks model function
self.rnn = gru_gcn(hidden_dim + hidden_dim, hidden_dim, n_layers, bias)
def forward(self, x, edge_idx_list, edge_attr_matrix, hidden_in=None):
#assert the length of the edge matrix equals the length of the edge indices
assert len(edge_attr_matrix) == len(edge_idx_list)
#print(x.size()) #[26, 149, 6]
#initialize params
kld_loss = 0
l2_loss = 0
encoder_mu_list, encoder_sigma_list = [], []
prior_mu_list, prior_sigma_list = [], []
decoded_list, z_list = [], []
kld_loss_list, l2_loss_list = [], []
#hidden var will be none in the first set of operations
if hidden_in is None:
#so we create a matrix of zeros as initial representation
h = torch.zeros(self.n_layers, x.size(1), self.hidden_dim) #size([1, 149, 32])
else:
#hidden var here will be the recurrent vectors
h = hidden_in
for t in range(x.size(0)):
#linearly transform x features
input_emb_t = self.input_emb(x[t].float()) #[149, 32]
#edge indices at time t
edge_idx_list_t = edge_idx_list[t]
#edge tensor matrix at time t => extract only the tensors associated with the edge indices at time t
edge_tensor_t = (edge_attr_matrix[t][edge_idx_list_t[0], edge_idx_list_t[1]])#[:, 0:latent_var_dim]
#encoder
#the encoder is conditioned on the recurrent hidden state so that
#features of previous time steps are modeled
encoder_t = self.encoder(torch.cat([input_emb_t, h[-1]], 1), edge_idx_list_t, edge_tensor_t) #[149, 32]
#encoder mean
encoder_mu_t = self.encoder_mu(encoder_t, edge_idx_list_t, edge_tensor_t) #[149, 16]
#encoder standard deviation
encoder_sigma_t = self.encoder_sigma(encoder_t, edge_idx_list_t, edge_tensor_t) #[149, 16]
#prior
prior_t = self.prior(h[-1]) #[149, 32]
prior_mu_t = self.prior_mu(prior_t) #[149, 16]
prior_sigma_t = self.prior_sigma(prior_t) #[149, 16]
#sampling and reparameterization
z_t = self._reparameterized_sample(encoder_mu_t, encoder_sigma_t) #[149, 16]
#apply a fully connected layer to z_t
output_emb_t = self.output_emb(z_t) #[149, 32]
#decoder function -> takes the linearly transformed latent variable and edge indices as args
#decoder_t = self.dec(z_t, edge_idx_list_t)
decoder_t = self.dec(output_emb_t, edge_idx_list_t)
#recurrence
_, h = self.rnn(torch.cat([input_emb_t, output_emb_t], 1), edge_idx_list_t, edge_tensor_t, h) #[1, 149, 32]
num_nodes = edge_attr_matrix[t].size(0)
encoder_mu_t_slice = encoder_mu_t[0:num_nodes, :]
encoder_sigma_t_slice = encoder_sigma_t[0:num_nodes, :]
prior_mu_t_slice = prior_mu_t[0:num_nodes, :]
prior_sigma_t_slice = prior_sigma_t[0:num_nodes, :]
#computing losses
kld_loss_t = self.kl_divergence(encoder_mu_t_slice, encoder_sigma_t_slice, prior_mu_t_slice, prior_sigma_t_slice)
kld_loss_list.append(kld_loss_t)
kld_loss = kld_loss + kld_loss_t
#kld_loss += self.kl_divergence_zu(encoder_mu_t, encoder_sigma_t)
l2_loss_t = self._l2_norm(decoder_t, edge_tensor_t)
l2_loss_list.append(l2_loss_t)
l2_loss = l2_loss + l2_loss_t
encoder_sigma_list.append(encoder_sigma_t_slice)
encoder_mu_list.append(encoder_mu_t_slice)
prior_mu_list.append(prior_mu_t_slice)
prior_sigma_list.append(prior_sigma_t_slice)
decoded_list.append(decoder_t)
z_list.append(z_t)
#print(decoded_list)
return kld_loss, l2_loss, encoder_mu_list, prior_mu_list, decoded_list, h, kld_loss_list, l2_loss_list
#decoder function
def dec(self, z, edge_index):
#input features have dimension = the col sizes of Zi and Zj
in_feat = int(z.size(1))
#output feature has the size of the edge channels
out_feat = int(self.edge_feat_dim)
#output = neural network decoder (note: this builds a fresh, untrained
#Decoder instance on every call)
outputs = Decoder(in_feat, out_feat, act=lambda x:x)(z, edge_index)
return outputs
def reset_parameters(self, stdv=1e-1):
for weight in self.parameters():
weight.data.normal_(0, stdv)
def _init_weights(self, stdv):
pass
def _reparameterized_sample(self, mean, std):
eps1 = torch.FloatTensor(std.size()).normal_()
eps1 = Variable(eps1)
return eps1.mul(std).add_(mean)
#VAE loss function regularizer
def kl_divergence(self, encoder_mu, encoder_sigma, prior_mu, prior_sigma):
mu_size = encoder_mu.size(0)
encoder_sigma_log = torch.log(encoder_sigma + self.eps)
prior_sigma_log = torch.log(prior_sigma + self.eps)
encoder_sigma = encoder_sigma + self.eps
prior_sigma = prior_sigma + self.eps
kld_element = (2 * prior_sigma_log - 2 * encoder_sigma_log + (torch.pow(encoder_sigma, 2) + torch.pow(encoder_mu - prior_mu, 2)) /
torch.pow(prior_sigma, 2) - 1)
kld_element = kld_element.detach().numpy()
kld_element = torch.tensor(np.nan_to_num(kld_element, copy=True, nan=0.0))
kld = (0.5 / mu_size) * kld_element.sum(1).mean()
return kld
def kl_divergence_zu(self, mu, sigma):
mu_size = mu.size(0)
sigma_log = torch.log(sigma + self.eps)
kld_element = (1 + 2*sigma_log - (sigma**2) - (mu**2))
kld_element = kld_element.detach().numpy()
kld_element = torch.tensor(np.nan_to_num(kld_element, copy=True, nan=0.0))
return (-0.5 / mu_size) * kld_element.sum(1).mean()
def regularizer(self, samples_size, features_size, lambda_value=0.01):
m = samples_size
n = features_size
W = torch.randn(m, n)
reg_term = (lambda_value / (2 * m)) * torch.sum(torch.square(W))
#print(reg_term)
return reg_term
#reconstruction loss - (nll for binary classification model)
#the log likelihood of the true observation given the predicted distribution
def _l2_norm(self, pred, actual):
x_size_row = actual.size(0)
x_size_col = actual.size(1)
#l2_reg = self.regularizer(x_size_row, x_size_col)
loss = nn.MSELoss(reduction='mean')
l2_loss = loss(input=pred.float(), target=actual.float())
l2_loss_val = (1.0 / x_size_row) * l2_loss
l2_norm = l2_loss_val #+ l2_reg
return l2_norm
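Two pieces of `VGAE_Edge` are worth isolating: the reparameterization trick in `_reparameterized_sample` (`z = mu + sigma * eps`, which keeps sampling differentiable with respect to the distribution parameters) and the closed-form Gaussian KL term in `kl_divergence`. A numpy sketch with illustrative values; it mirrors the per-element `kld_element` term, while the model additionally averages over nodes:

```python
import numpy as np

# Illustrative encoder and prior parameters (5 latent dimensions).
rng = np.random.default_rng(1)
enc_mu, enc_sigma = np.zeros(5), np.ones(5) * 0.5
prior_mu, prior_sigma = np.zeros(5), np.ones(5)

# Reparameterization: sample eps ~ N(0, I), then shift and scale.
eps = rng.standard_normal(5)
z = enc_mu + enc_sigma * eps

# Closed-form KL(N(enc_mu, enc_sigma) || N(prior_mu, prior_sigma)),
# matching the kld_element expression in kl_divergence.
kld = 0.5 * np.sum(
    2 * np.log(prior_sigma) - 2 * np.log(enc_sigma)
    + (enc_sigma ** 2 + (enc_mu - prior_mu) ** 2) / prior_sigma ** 2
    - 1
)
```

With equal means and `enc_sigma = 0.5` against a unit-variance prior, the KL is strictly positive, as expected.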
# + id="1yDfLMrsLUk5"
class Decoder(nn.Module):
def __init__(self, in_feat, out_feat, act=torch.sigmoid):
super(Decoder, self).__init__()
self.act = act
self.in_feat = in_feat
self.out_feat = out_feat
self.edge_nn_mean = nn.Sequential(nn.Linear(self.in_feat, self.out_feat))
self.edge_nn_std = nn.Sequential(nn.Linear(self.in_feat, self.out_feat), nn.Softplus())
self.edge_nn = nn.Linear(self.in_feat * 2, self.out_feat, bias=False)
def forward(self, z, edge_index):
z = F.dropout(z, p=0., training=True)
#emb_mean = self.edge_nn_mean(z)
#emb_std = self.edge_nn_std(z)
#x_hat = emb_std.add(emb_mean) #size[149, 10]
#embeddings of edge_index 0
z0_emb = z[edge_index[0]] #size[N, 10]
#embeddings of edge index 1
z1_emb = z[edge_index[1]]
#concatenate the embeddings of the edge indices
#edge_mult = (z0_emb * z1_emb)#.sum(dim=1) #size[N, 20]
edge_mult = torch.cat([z0_emb, z1_emb], dim=-1)
r'''pass through a neural network. Sigmoid activation function can be used
in case of binomial cross-entropy problem.'''
#For regression task, just a linear or identity function is okay
edge_emb = self.edge_nn(edge_mult) #size[N, 10]
return edge_emb
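The `Decoder` scores an edge `(i, j)` by concatenating the embeddings of its two endpoints and applying a single linear map. The shape bookkeeping can be sketched in numpy, with a random matrix standing in for `edge_nn` (all values here are illustrative only):

```python
import numpy as np

# 5 node embeddings of dimension 10, decoded to 10 edge channels.
rng = np.random.default_rng(2)
latent, edge_dim = 10, 10
z = rng.standard_normal((5, latent))
W = rng.standard_normal((2 * latent, edge_dim))  # stands in for edge_nn

# Three edges to decode: concatenate z_i with z_j, then apply the map.
edge_index = np.array([[0, 1, 3], [2, 4, 0]])
edge_emb = np.concatenate([z[edge_index[0]], z[edge_index[1]]], axis=-1) @ W
```

Each decoded row corresponds to one edge, with one column per edge channel.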
# + id="6sko45PpLa6x"
r""" Evaluate edge reconstruction quality with regression metrics:
MSE, RMSE, MAE and a scaled-MAE stand-in for MAPE """
# evaluation function
def get_edge_eval_scores(edges_pos, edges_neg, edge_attr_matrix, edge_feat_dim, edge_emb):
mse, rmse = [], []
mae, mape = [], []
in_channel = edge_emb[0].size(1)
out_channel = edge_feat_dim
nn_layer = nn.Linear(in_channel*2, out_channel, bias=False)
for i in range(len(edges_pos)):
# Predict on test set of edges
#detach the tensor from the computation graph
#(so no gradients flow) and convert back to numpy
emb = edge_emb[i].detach().numpy()
z_emb = torch.tensor(emb)
#edge tensor matrix
edge_attr_mat = edge_attr_matrix[i]
#initialize predicted edge list
pred_pos_edges, pos_edges = [], []
pos_edge_list = []
pos_list = []
for e in edges_pos[i]:
z_i = z_emb[e[0]]
z_j = z_emb[e[1]]
cat_embs = torch.cat([z_i, z_j], dim=-1)
pos_embs_var = nn_layer(cat_embs).detach().numpy()
#append the reconstructed edge features
#Note: no sigmoid is applied here, since the task is regression-like
pred_pos_edges.append(pos_embs_var)
#positive edge tensor
pos_edges.append((edge_attr_mat[e[0], e[1]]))
pos_edge_list.append((e[0], e[1], pos_embs_var))
pos_list.append((e[0], e[1], (edge_attr_mat[e[0], e[1]])))
pred_neg_edges, neg_edges = [], []
neg_edge_list = []
neg_list = []
for e in edges_neg[i]:
z_i = z_emb[e[0]]
z_j = z_emb[e[1]]
cat_embs = torch.cat([z_i, z_j], dim=-1)
neg_embs_var = nn_layer(cat_embs).detach().numpy()
pred_neg_edges.append(neg_embs_var)
neg_edges.append((edge_attr_mat[e[0], e[1]]))
neg_edge_list.append((e[0], e[1], neg_embs_var))
neg_list.append((e[0], e[1], (edge_attr_mat[e[0], e[1]])))
#stack up the positive and negative predicted features
pred_all_edges = np.hstack([pred_pos_edges, pred_neg_edges])
#for error free mean square eval
pos_edges = [t.detach().numpy() for t in pos_edges]
neg_edges = [t.detach().numpy() for t in neg_edges]
#stack up all positive and negative features
all_true_edges = np.hstack([pos_edges, neg_edges])
#evaluate the mean square error loss of the ground truth and predicted values
mse.append(mean_squared_error(all_true_edges, pred_all_edges))
rmse.append(mean_squared_error(all_true_edges, pred_all_edges, squared=False))
mae.append(mean_absolute_error(all_true_edges, pred_all_edges))
mape.append(mean_absolute_error(all_true_edges, pred_all_edges)*100) #MAE scaled by 100, used as a rough MAPE proxy
return mse, rmse, mae, mape, pos_edge_list, neg_edge_list, pos_list, neg_list
# + id="42a_R7tBLiYF"
# hyperparameters
hidden_dim = 32
latent_var_dim = 16
n_layers = 1
clip = 10
learning_rate = 1e-2
num_nodes = node_attr_matrix[0].shape[1]
node_feat_dim = num_nodes
edge_feat_dim = 10
timesteps_len = len(train_edges_l) #27
#print(timesteps_len)
eps = 1e-10
conv_type='GCN'
# creating input tensors
node_attr = torch.stack(node_attr_matrix) #[80, 149, 6]
adj_label_list = []
for i in range(len(adj_train_l)):
temp_matrix = adj_train_l[i]
adj_label_list.append(torch.tensor(temp_matrix.toarray().astype(np.float32)))
# building model
model = VGAE_Edge(node_feat_dim, hidden_dim, latent_var_dim, n_layers, edge_feat_dim, eps, conv=conv_type, bias=True)
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
#print(model)
# training
timesteps_init = 0
#25% of 36
timesteps_end = timesteps_len - 1 #27
#print(timesteps_end)
test_init = 0
#train_edges_l
training_edges = (train_edges_l[timesteps_end: timesteps_len])
edge_train = (edge_attr_matrix[timesteps_end: timesteps_len])
#writer = SummaryWriter('drive/MyDrive/Colab Notebooks/VGRNN/tensorboard_log/' + datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
start_time = time.monotonic()
for k in range(1000):
optimizer.zero_grad()
kld_loss, l2_loss, _, _, _, hidden_state, kld_list, loss_list = model(node_attr[timesteps_init:timesteps_end]
, edge_idx_list[timesteps_init:timesteps_end]
, edge_attr_matrix[timesteps_init:timesteps_end]
)
#print(kld_list)
#print(loss_list)
loss = kld_loss + l2_loss
loss.backward()
nn.utils.clip_grad_norm_(model.parameters(), clip)
optimizer.step()
if k > test_init:
kld_loss, l2_loss, encs_, priors_, edge_dec, _, kld_list, loss_list = model(node_attr[timesteps_end:timesteps_len]
, edge_idx_list[timesteps_end:timesteps_len]
, edge_attr_matrix[timesteps_end:timesteps_len]
, hidden_state)
mse_val, rmse_val, mae_val, mape_val, pred_pos, pred_neg, pos, neg = get_edge_eval_scores(
pos_edges_l[timesteps_end:timesteps_len]
, false_edges_l[timesteps_end:timesteps_len]
, edge_attr_matrix[timesteps_end:timesteps_len]
, edge_feat_dim
, priors_
)
#mse_val, rmse_val, mae_val, mape_val, pred_pos, pred_neg, pos, neg = get_edge_eval_scores(
# val_edges_l[timesteps_end:timesteps_len]
#, val_edges_false_l[timesteps_end:timesteps_len]
#, edge_attr_matrix[timesteps_end:timesteps_len]
#, edge_feat_dim
#, priors_
#)
#mse_test, rmse_test, mae_test, mape_test, pred_pos, pred_neg, pos, neg = get_edge_eval_scores(
# test_edges_l[timesteps_end:timesteps_len]
#, test_edges_false_l[timesteps_end:timesteps_len]
#, edge_attr_matrix[timesteps_end:timesteps_len]
#, edge_feat_dim
#, priors_
#)
#Note: using the prior mean yields a lower loss than using the decoded variables.
print('********************************************************')
print('epoch: ', k)
print('\nLOSS => kld_loss: {} | l2_loss: {} | loss: {}'.format(round(kld_loss.mean().item(), 4)
, round(l2_loss.mean().item(), 4)
, round(loss.mean().item(), 4)
))
#writer.add_scalar("Loss/train", loss.mean().item(), k)
if k > test_init:
#writer.add_scalar("validation mean_score", np.mean(np.array(mse_val)), k)
#writer.add_scalar("test mean_score", np.mean(np.array(mse_test)), k)
print('\nEDGE RECONSTRUCTION VAL => mse: {} | rmse: {} | mae: {} | mape: {}'.format(
round(np.mean(np.array(mse_val)), 4)
, round(np.mean(np.array(rmse_val)), 4)
, round(np.mean(np.array(mae_val)), 4)
, round(np.mean(np.array(mape_val)), 4)
))
#print('\nEDGE RECONSTRUCTION TEST => mse: {} | rmse: {} | mae: {} | mape: {}'.format(
# round(np.mean(np.array(mse_test)), 4)
#, round(np.mean(np.array(rmse_test)), 4)
#, round(np.mean(np.array(mae_test)), 4)
#, round(np.mean(np.array(mape_test)), 4)
#))
#print('Pos: ', pos)
#print('Pos_Pred: ', pred_pos)
#print('\nNeg: ', neg)
#print('Neg Pred: ', pred_neg)
#writer.flush()
#writer.close()
end_time = time.monotonic()
print('Total Execution Time: {}'.format(timedelta(seconds=end_time - start_time)))
# #!pip install tensorboard
# #!tensorboard --logdir='drive/MyDrive/Colab Notebooks/VGRNN/tensorboard_log/'
| edge_reconstruction_main.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Example usage of SeisSrcMoment to calculate $M_W$ in the time domain
# This example is for an icequake at Vatnajokull, Iceland. The moment magnitude, $M_W$, is calculated in the time domain, i.e. the long period spectral level is calculated from the integral of the displacement through time.
# ## 1. Specify parameters to use:
import numpy as np
from SeisSrcMoment import moment
# Specify variables:
stations_to_calculate_moment_for = ["SKR01", "SKR02", "SKR03", "SKR04", "SKR05", "SKR06", "SKR07"]
stations_not_to_process = ["SKG08", "SKG09", "SKG10", "SKG11", "SKG12", "SKG13", "GR01", "GR02", "GR03","GR04","BARD"]
mseed_filename = "data/mseed_data/20140629184210331.m"
inventory_fname = None
instruments_gain_filename = "data/instrument_gain_data.txt" # File with instrument name, instrument gains (Z,N,E) and digitaliser gains (Z,N,E)
NLLoc_event_hyp_filename = "data/NLLoc_data/loc.Tom__RunNLLoc000.20140629.184210.grid0.loc.hyp"
window_before_after = [0.004, 0.196] # The time before and after the phase pick to use for calculating the magnitude within
filt_freqs = []
# MT_data_filename = "data/MT_data/20140629184210363MT.mat"
MT_six_tensor = np.array([1.,1.,1.,0.,0.,0.]) # Explosion in this example.
density = 917. # Density of medium, in kg/m3
Vp = 3630. # P-wave velocity in m/s
Q = 150. # Quality factor for the medium
verbosity_level = 1 # Verbosity level (1 for moment only) (2 for major parameters) (3 for plotting of traces)
# ## Run moment calculation:
# +
# Find seismic moment release:
av_M_0, std_err_seis_M_0, n_obs, event_obs_dict = moment.calc_moment(mseed_filename, NLLoc_event_hyp_filename, stations_to_calculate_moment_for, density, Vp, inventory_fname=inventory_fname, instruments_gain_filename=instruments_gain_filename, Q=Q, window_before_after=window_before_after, filt_freqs=filt_freqs, stations_not_to_process=stations_not_to_process, MT_six_tensor=MT_six_tensor, verbosity_level=verbosity_level)
print("Seismic moment release (Nm):", av_M_0)
# And find corresponding moment magnitude, M_w (Hanks and Kanamori 1979):
M_w = (2./3.)*np.log10(av_M_0) - 6.0
print("Local moment magnitude, M:", M_w)
# -
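As a quick sanity check of the Hanks & Kanamori conversion used above (with $M_0$ in N·m the additive constant is often quoted as roughly 6.06 rather than the 6.0 used here), for a purely illustrative moment value:

```python
import numpy as np

# Hypothetical seismic moment, N*m (illustrative value only).
M_0 = 1.0e10
M_w = (2.0 / 3.0) * np.log10(M_0) - 6.0
```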
| examples/.ipynb_checkpoints/example-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from __future__ import division
import pickle
import os
import types
import random
import uuid
import math
from copy import deepcopy as copy
import gym
from gym import spaces
from gym.envs.classic_control import rendering
import numpy as np
import tensorflow as tf
from scipy.special import logsumexp
# -
from matplotlib import pyplot as plt
from matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas
# %matplotlib inline
import matplotlib as mpl
mpl.rc('savefig', dpi=300)
mpl.rc('text', usetex=True)
data_dir = os.path.join('data', '3.0-continuous-ime')
sess = tf.Session()
# create envs, pilot policies
n_train_tasks = 49
n_act_dim = 4
n_obs_dim = 4
gamma = 0.99
max_ep_len = 200
accel = 0.01
goal_dist_thresh = 2*accel
succ_rew_bonus = 1
crash_rew_penalty = -1
max_speed = 10*accel
is_succ = lambda r: r[-1][2] > succ_rew_bonus / 2
is_crash = lambda r: r[-1][2] < crash_rew_penalty / 2
train_goals = np.random.random((n_train_tasks, 2))
plt.scatter(train_goals[:, 0], train_goals[:, 1], linewidth=0, color='gray', s=100, marker='*')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.show()
with open(os.path.join(data_dir, 'train_goals.pkl'), 'wb') as f:
pickle.dump(train_goals, f, pickle.HIGHEST_PROTOCOL)
with open(os.path.join(data_dir, 'train_goals.pkl'), 'rb') as f:
train_goals = pickle.load(f)
def make_reward_func(goal):
def reward_shaping(obs):
return -np.linalg.norm((obs[:2] - goal))
def reward_func(prev_obs, action, obs):
pos = obs[:2]
if (pos < 0).any() or (pos >= 1).any():
r = crash_rew_penalty
elif (np.abs(pos - goal) <= goal_dist_thresh).all():
r = succ_rew_bonus
else:
r = gamma * reward_shaping(obs) - reward_shaping(prev_obs)
return r
return reward_func
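`make_reward_func` applies potential-based reward shaping, $r = \gamma\,\phi(s') - \phi(s)$ with $\phi(s) = -\lVert \mathrm{pos} - \mathrm{goal}\rVert$, on top of the terminal success bonus and crash penalty. A toy check (coordinates here are illustrative) that a step toward the goal earns a positive shaped reward:

```python
import numpy as np

gamma = 0.99                     # same discount as the notebook
goal = np.array([0.5, 0.5])
phi = lambda obs: -np.linalg.norm(obs[:2] - goal)

prev_obs = np.array([0.2, 0.2, 0.0, 0.0])   # far from goal
obs = np.array([0.3, 0.3, 0.0, 0.0])        # one step closer
r = gamma * phi(obs) - phi(prev_obs)        # shaping term of reward_func
```

Because the potential increases as the agent approaches the goal, the shaped reward is positive for this transition.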
class PointMassNav(gym.Env):
metadata = {
'render.modes': ['human']
}
def __init__(
self,
using_inertia=True,
max_ep_len=max_ep_len,
reward_func=None,
goal=None,
rand_goal=False,
expose_goal=False
):
self.expose_goal = expose_goal
if self.expose_goal:
lows = np.ones(n_obs_dim + 2) * -1
highs = np.ones(n_obs_dim + 2) * 2
else:
lows = np.ones(n_obs_dim) * -1
highs = np.ones(n_obs_dim) * 2
self.observation_space = spaces.Box(lows, highs)
self.action_space = spaces.Discrete(n_act_dim)
self.pos = None
self.vel = None
self.curr_step = None
self.viewer = None
self.curr_obs = None
self.succ_rew_bonus = succ_rew_bonus
self.max_ep_len = max_ep_len
self.reward_func = reward_func
self.using_inertia = using_inertia
self.goal = goal
self.rand_goal = rand_goal
def _obs_of_pos_vel(self, pos, vel):
if self.expose_goal:
return np.concatenate((pos, vel, self.goal))
else:
return np.concatenate((pos, vel))
def _obs(self):
self.curr_obs = self._obs_of_pos_vel(self.pos, self.vel)
return self.curr_obs
def _next_pos_vel(self, pos, vel, action):
next_pos = copy(pos)
if self.using_inertia:
next_vel = copy(vel)
else:
next_vel = np.zeros(2)
if action == 0: # left
next_vel[1] -= accel
elif action == 1: # right
next_vel[1] += accel
elif action == 2: # up
next_vel[0] -= accel
elif action == 3: # down
next_vel[0] += accel
else:
raise ValueError('invalid action')
next_vel = np.maximum(np.minimum(next_vel, max_speed), -max_speed)
next_pos += next_vel
return next_pos, next_vel
def _step(self, action):
self.pos, self.vel = self._next_pos_vel(self.pos, self.vel, action)
self.curr_step += 1
succ = (np.abs(self.pos - self.goal) <= goal_dist_thresh).all()
oob = (self.pos < 0).any() or (self.pos >= 1).any()
oot = self.curr_step >= self.max_ep_len
obs = self._obs()
r = self.reward_func(self.prev_obs, action, obs)
done = oot or succ or oob
info = {}
self.prev_obs = obs
return obs, r, done, info
def _reset(self):
self.pos = np.random.random(2)
self.vel = np.zeros(2)
if self.rand_goal:
self.goal = np.random.random(2)
self.reward_func = make_reward_func(self.goal)
self.curr_step = 0
self.prev_obs = self._obs()
return self.prev_obs
def _render(self, mode='human', close=False):
if close:
if self.viewer is not None:
self.viewer.close()
self.viewer = None
return
if self.viewer is None:
self.viewer = rendering.SimpleImageViewer()
fig = plt.figure()
canvas = FigureCanvas(fig)
plt.scatter([self.goal[0]], [self.goal[1]], color='gray', linewidth=0, alpha=0.75, marker='*')
plt.scatter([self.pos[0]], [self.pos[1]], color='orange', linewidth=0, alpha=0.75)
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.axis('off')
agg = canvas.switch_backends(FigureCanvas)
agg.draw()
width, height = fig.get_size_inches() * fig.get_dpi()
self.viewer.imshow(np.frombuffer(agg.tostring_rgb(), dtype='uint8').reshape(int(height), int(width), 3))
plt.close()
train_reward_funcs = [make_reward_func(goal) for goal in train_goals]
train_newton_envs = [PointMassNav(reward_func=r, goal=train_goals[i], using_inertia=True) for i, r in enumerate(train_reward_funcs)]
train_aristotle_envs = [PointMassNav(reward_func=r, goal=train_goals[i], using_inertia=False) for i, r in enumerate(train_reward_funcs)]
def run_ep(policy, env, max_ep_len=max_ep_len, render=False, task_idx=None):
obs = env.reset()
done = False
totalr = 0.
prev_obs = obs
rollout = []
for step_idx in range(max_ep_len+1):
if done:
break
action = policy(obs)
obs, r, done, info = env.step(action)
rollout.append((prev_obs, action, r, obs, float(done), task_idx))
prev_obs = obs
if render:
env.render()
totalr += r
return rollout
def make_aristotle_pilot_policy(goal, denoise=False):
eps = accel if denoise else 0
gx, gy = goal
def aristotle_pilot_policy(obs):
x, y = obs[:2]
up = gx<x-eps
down = gx>x+eps
left = gy<y-eps
right = gy>y+eps
lr = left or right
ud = up or down
if lr and (not ud or np.random.random() < 0.5):
if left:
return 0
elif right:
return 1
elif ud:
if up:
return 2
elif down:
return 3
return 0
return aristotle_pilot_policy
aristotle_pilot_policies = [make_aristotle_pilot_policy(goal) for goal in train_goals]
# sanity-check envs, agents
train_task_idx = 0
run_ep(aristotle_pilot_policies[train_task_idx], train_aristotle_envs[train_task_idx], render=True)
train_aristotle_envs[train_task_idx].close()
run_ep(aristotle_pilot_policies[train_task_idx], train_newton_envs[train_task_idx], render=True)
train_newton_envs[train_task_idx].close()
# fit internal dynamics model
n_train_rollouts_per_env = 1000
demo_rollouts = [[run_ep(aristotle_pilot_policies[train_task_idx], newton_env, render=False, task_idx=train_task_idx)
for _ in range(n_train_rollouts_per_env)]
for train_task_idx, newton_env in enumerate(train_newton_envs)]
with open(os.path.join(data_dir, 'aristotle_pilot_policy_demo_rollouts.pkl'), 'wb') as f:
pickle.dump(demo_rollouts, f, pickle.HIGHEST_PROTOCOL)
with open(os.path.join(data_dir, 'aristotle_pilot_policy_demo_rollouts.pkl'), 'rb') as f:
demo_rollouts = pickle.load(f)
def build_mlp(
input_placeholder,
output_size,
scope,
n_layers=1,
size=256,
activation=tf.nn.relu,
output_activation=None,
reuse=False
):
out = input_placeholder
with tf.variable_scope(scope, reuse=reuse):
for _ in range(n_layers):
out = tf.layers.dense(out, size, activation=activation)
out = tf.layers.dense(out, output_size, activation=output_activation)
return out
# +
def onehot_encode(i, n):
x = np.zeros(n)
x[i] = 1
return x
def onehot_decode(x):
l = np.nonzero(x)[0]
assert len(l) == 1
return l[0]
n_obs_feats = n_obs_dim
def featurize_obs(s):
return s
n_act_feats = 2
feats_of_act = np.array([
[0, -1],
[0, 1],
[-1, 0],
[1, 0]
], dtype=float) * accel
def featurize_act(a):
return feats_of_act[a, :]
# -
def vectorize_rollouts(rollouts):
obs = [[] for _ in range(n_train_tasks)]
actions = [[] for _ in range(n_train_tasks)]
for task_idx, task_rollouts in enumerate(rollouts):
for task_rollout in task_rollouts:
more_obs, more_actions = list(zip(*task_rollout))[:2]
obs[task_idx].extend([featurize_obs(s) for s in more_obs])
actions[task_idx].extend(more_actions)
l = min(len(x) for x in obs)
idxes = [random.sample(list(range(len(x))), l) for x in obs]
f = lambda x: np.array(x[1])[idxes[x[0]]]
obs = np.array(list(map(f, enumerate(obs))))
actions = np.array(list(map(f, enumerate(actions))))
return obs, actions
demo_obs = None
demo_actions = None
demo_next_obs = None
demo_task_idxes = None
train_demo_example_idxes = None
val_demo_batch = None
def process_demo_rollouts(demo_rollouts):
global demo_obs
global demo_actions
global demo_next_obs
global demo_task_idxes
global train_demo_example_idxes
global val_demo_batch
vectorized_demo_rollouts = vectorize_rollouts(demo_rollouts)
demo_obs, demo_actions = vectorized_demo_rollouts
demo_example_idxes = list(range(demo_obs.shape[1]))
random.shuffle(demo_example_idxes)
n_train_demo_examples = int(0.9 * len(demo_example_idxes))
train_demo_example_idxes = demo_example_idxes[:n_train_demo_examples]
val_demo_example_idxes = demo_example_idxes[n_train_demo_examples:]
val_demo_batch = demo_obs[:, val_demo_example_idxes], demo_actions[:, val_demo_example_idxes]
process_demo_rollouts(demo_rollouts)
def sample_batch(size):
idxes = random.sample(train_demo_example_idxes, size)
demo_batch = demo_obs[:, idxes], demo_actions[:, idxes]
return demo_batch
# +
gamma = 0.99
iterations = 100000
learning_rate = 1e-3
batch_size = 512 // n_train_tasks
sq_td_err_penalty = 1e0
q_n_layers = 1
q_layer_size = 32
q_activation = tf.nn.relu
q_output_activation = None
constraint_sampling_freq = 100000
constraint_batch_size = batch_size
n_constraint_rollouts_per_env = 500
val_update_freq = 100
n_val_eval_rollouts = 100
# -
im_scope = str(uuid.uuid4())
q_scope = str(uuid.uuid4())
# +
demo_obs_t_ph = tf.placeholder(tf.float32, [n_train_tasks, None, n_obs_feats])
demo_act_t_ph = tf.placeholder(tf.int32, [n_train_tasks, None])
demo_batch_size_ph = tf.placeholder(tf.int32)
constraint_obs_t_ph = tf.placeholder(tf.float32, [n_train_tasks, None, n_obs_feats])
constraint_act_t_ph = tf.placeholder(tf.int32, [n_train_tasks, None])
constraint_act_t_feats_ph = tf.placeholder(tf.float32, [n_train_tasks, None, n_act_feats])
constraint_batch_size_ph = tf.placeholder(tf.int32)
# +
demo_batch_idxes = tf.reshape(
tf.range(0, demo_batch_size_ph, 1),
[demo_batch_size_ph, 1])
extract_task = lambda x, i: tf.squeeze(tf.gather(x, tf.convert_to_tensor(
[i], dtype=tf.int32)), axis=[0])
demo_q_t = tf.stack([tf.gather_nd(
build_mlp(
extract_task(demo_obs_t_ph, train_task_idx),
n_act_dim, q_scope+'-'+str(train_task_idx),
n_layers=q_n_layers, size=q_layer_size,
activation=q_activation, output_activation=q_output_activation
),
tf.concat([
demo_batch_idxes,
tf.expand_dims(extract_task(demo_act_t_ph, train_task_idx), 1)], axis=1)
) for train_task_idx in range(n_train_tasks)], axis=0)
demo_v_t = tf.reduce_logsumexp(
tf.stack([build_mlp(
extract_task(demo_obs_t_ph, train_task_idx),
n_act_dim, q_scope+'-'+str(train_task_idx),
n_layers=q_n_layers, size=q_layer_size,
activation=q_activation, output_activation=q_output_activation,
reuse=True
) for train_task_idx in range(n_train_tasks)], axis=0),
axis=2)
act_log_likelihoods = demo_q_t - demo_v_t
# -
neg_avg_log_likelihood = -tf.reduce_mean(act_log_likelihoods)
# +
constraint_act_t_feats_reshaped = tf.reshape(
constraint_act_t_feats_ph, [n_train_tasks*constraint_batch_size_ph, n_act_feats])
constraint_obs_t_reshaped = tf.reshape(
constraint_obs_t_ph, [n_train_tasks*constraint_batch_size_ph, n_obs_feats])
# -
assert n_obs_feats == 4
assert n_act_feats == 2
# +
int_dyn_A_fixed = np.zeros((n_obs_feats, 2))
int_dyn_A_fixed[[0, 1], [0, 1]] = 1
int_dyn_A_top = np.zeros((2, 2))
int_dyn_A_top[[0, 1], [0, 1]] = 1
int_dyn_A_top = tf.convert_to_tensor(int_dyn_A_top, tf.float32)
int_dyn_A_top *= 1 / (1 + tf.exp(-tf.get_variable(
im_scope+'-A-top', [1],
initializer=tf.random_normal_initializer)))
int_dyn_A_bot = np.zeros((2, 2))
int_dyn_A_bot[[0, 1], [0, 1]] = 1
int_dyn_A_bot = tf.convert_to_tensor(int_dyn_A_bot, tf.float32)
int_dyn_A_bot *= 1 / (1 + tf.exp(-tf.get_variable(
im_scope+'-A-bot', [1],
initializer=tf.random_normal_initializer)))
int_dyn_A = tf.concat([
tf.convert_to_tensor(int_dyn_A_fixed, tf.float32),
tf.concat([int_dyn_A_top, int_dyn_A_bot], axis=0)
], axis=1)
# +
int_dyn_B_vel = np.zeros((n_obs_feats, n_act_feats))
int_dyn_B_vel[[0, 1], [0, 1]] = 1
int_dyn_B_vel = tf.convert_to_tensor(int_dyn_B_vel, tf.float32)
int_dyn_B_acc = np.zeros((n_obs_feats, n_act_feats))
int_dyn_B_acc[[2, 3], [0, 1]] = 1
int_dyn_B_acc = tf.convert_to_tensor(int_dyn_B_acc, tf.float32)
int_dyn_B_switch = 1 / (1 + tf.exp(-tf.get_variable(
im_scope+'-B', [1],
initializer=tf.random_normal_initializer)))
int_dyn_B = int_dyn_B_switch * int_dyn_B_vel + (1 - int_dyn_B_switch) * int_dyn_B_acc
# +
int_dyn_A_mask = np.zeros((n_obs_feats, n_obs_feats))
mask_idxes = [[0, 0], [0, 2], [1, 1], [1, 3], [2, 2], [3, 3]]
for x, y in mask_idxes:
int_dyn_A_mask[x, y] = 1
int_dyn_A *= int_dyn_A_mask
int_dyn_B_mask = np.zeros((n_obs_feats, n_act_feats))
mask_idxes = [[0, 0], [1, 1], [2, 0], [3, 1]]
for x, y in mask_idxes:
int_dyn_B_mask[x, y] = 1
int_dyn_B *= int_dyn_B_mask
# -
constraint_obs_tp1 = tf.reshape(
tf.transpose(tf.matmul(int_dyn_A, tf.transpose(
constraint_obs_t_reshaped)) + tf.matmul(int_dyn_B, tf.transpose(
constraint_act_t_feats_reshaped))),
[n_train_tasks, constraint_batch_size_ph, n_obs_feats])
q_tp1 = tf.stack([build_mlp(
extract_task(constraint_obs_tp1, train_task_idx),
n_act_dim, q_scope+'-'+str(train_task_idx),
n_layers=q_n_layers, size=q_layer_size,
activation=q_activation, output_activation=q_output_activation,
reuse=True) for train_task_idx in range(n_train_tasks)], axis=0)
v_tp1 = tf.reduce_logsumexp(q_tp1, axis=2)
rew_ts = []
for train_task_idx in range(n_train_tasks):
goal_x = tf.convert_to_tensor(train_goals[train_task_idx, 0], dtype=tf.float32)
goal_y = tf.convert_to_tensor(train_goals[train_task_idx, 1], dtype=tf.float32)
constraint_obs_tp1_of_task = extract_task(constraint_obs_tp1, train_task_idx)
constraint_obs_t_of_task = extract_task(constraint_obs_t_ph, train_task_idx)
pos_x_tp1 = tf.gather(constraint_obs_tp1_of_task, tf.convert_to_tensor(
[0], dtype=tf.int32), axis=1)
pos_y_tp1 = tf.gather(constraint_obs_tp1_of_task, tf.convert_to_tensor(
[1], dtype=tf.int32), axis=1)
pos_x_t = tf.gather(constraint_obs_t_of_task, tf.convert_to_tensor(
[0], dtype=tf.int32), axis=1)
pos_y_t = tf.gather(constraint_obs_t_of_task, tf.convert_to_tensor(
[1], dtype=tf.int32), axis=1)
dist_to_goal_t = tf.sqrt((pos_x_t-goal_x)**2+(pos_y_t-goal_y)**2)
dist_to_goal_tp1 = tf.sqrt((pos_x_tp1-goal_x)**2+(pos_y_tp1-goal_y)**2)
crashed_t = tf.logical_or(tf.logical_or(tf.logical_or(
pos_x_tp1 < 0, pos_y_tp1 < 0), pos_x_tp1 >= 1), pos_y_tp1 >= 1)
succeeded_t = tf.logical_and(
tf.abs(pos_x_tp1-goal_x) <= goal_dist_thresh,
tf.abs(pos_y_tp1-goal_y) <= goal_dist_thresh)
rew_t = -gamma*dist_to_goal_tp1 + dist_to_goal_t
rew_t += crash_rew_penalty * tf.cast(crashed_t, tf.float32)
rew_t += succ_rew_bonus * tf.cast(tf.logical_and(tf.logical_not(crashed_t), succeeded_t), tf.float32)
rew_t = tf.squeeze(rew_t)
rew_ts.append(rew_t)
rew_t = tf.stack(rew_ts, axis=0)
target_t = rew_t + gamma * v_tp1
# +
constraint_batch_idxes = tf.reshape(
tf.range(0, constraint_batch_size_ph, 1),
[constraint_batch_size_ph, 1])
q_t = tf.stack([tf.gather_nd(
build_mlp(
extract_task(constraint_obs_t_ph, train_task_idx),
n_act_dim, q_scope+'-'+str(train_task_idx),
n_layers=q_n_layers, size=q_layer_size,
activation=q_activation, output_activation=q_output_activation,
reuse=True
),
tf.concat([
constraint_batch_idxes,
tf.expand_dims(extract_task(constraint_act_t_ph, train_task_idx), 1)], axis=1)
) for train_task_idx in range(n_train_tasks)], axis=0)
# -
td_err = q_t - target_t
sq_td_err = tf.reduce_mean(td_err**2)
loss = neg_avg_log_likelihood + sq_td_err_penalty * sq_td_err
update_op = tf.train.AdamOptimizer(learning_rate).minimize(loss)
# +
samp_obs_t_ph = tf.placeholder(tf.float32, [None, n_obs_feats])
samp_act_t_feats_ph = tf.placeholder(tf.float32, [None, n_act_feats])
samp_q_t = [build_mlp(
samp_obs_t_ph,
n_act_dim, q_scope+'-'+str(train_task_idx),
n_layers=q_n_layers, size=q_layer_size,
activation=q_activation, output_activation=q_output_activation,
reuse=True
) for train_task_idx in range(n_train_tasks)]
samp_obs_tp1 = tf.transpose(tf.matmul(int_dyn_A, tf.transpose(
samp_obs_t_ph)) + tf.matmul(int_dyn_B, tf.transpose(
samp_act_t_feats_ph)))
# +
def make_val_assisted_env():
test_goal = np.random.random(2)
test_reward_func = make_reward_func(test_goal)
test_aristotle_pilot_policy = make_aristotle_pilot_policy(test_goal, denoise=True)
env = PointMassNav(using_inertia=True, reward_func=test_reward_func, goal=test_goal)
return test_aristotle_pilot_policy, env
def compute_assisted_perf():
assisted_rollouts = [[] for _ in range(n_val_eval_rollouts)]
test_aristotle_pilot_policies, envs = zip(*[make_val_assisted_env(
) for _ in range(n_val_eval_rollouts)])
obses = np.array([env.reset() for env in envs])
dones = [False for _ in envs]
prev_obses = obses
for step_idx in range(max_ep_len+1):
not_done_idxes = [i for i, done in enumerate(dones) if not done]
if not_done_idxes == []:
break
act_feats = np.array([featurize_act(
test_aristotle_pilot_policies[i](obses[i])) for i in not_done_idxes])
obs_feats = np.array(
[featurize_obs(obses[i]) for i in not_done_idxes])
feed_dict = {
samp_obs_t_ph: obs_feats,
samp_act_t_feats_ph: act_feats
}
intended_obses = sess.run(samp_obs_tp1, feed_dict=feed_dict)
intended_actions = [inverse_real_dyn(
obs_feats[i], intended_obses[i]) for i in range(len(not_done_idxes))]
for i, env_idx in enumerate(not_done_idxes):
action = intended_actions[i]
obs, r, done, info = envs[env_idx].step(action)
obses[env_idx] = obs
dones[env_idx] = done
assisted_rollouts[env_idx].append((prev_obses[env_idx], None, r, obs, float(done), None))
prev_obses = copy(obses)
assisted_rew = np.mean([sum(x[2] for x in r) for r in assisted_rollouts])
assisted_succ = np.mean([1 if is_succ(r) else 0 for r in assisted_rollouts])
assisted_crash = np.mean([1 if is_crash(r) else 0 for r in assisted_rollouts])
assisted_perf = {
'assisted_rew': assisted_rew,
'assisted_succ': assisted_succ,
'assisted_crash': assisted_crash
}
return assisted_perf
# -
int_dyn_A_true = np.zeros((n_obs_feats, n_obs_feats))
int_dyn_A_true[[0, 0, 1, 1], [0, 2, 1, 3]] = 1
int_dyn_B_true = np.zeros((n_obs_feats, 2))
int_dyn_B_true[[0, 1], [0, 1]] = 1
def compute_int_dyn_err():
int_dyn_A_eval = sess.run(int_dyn_A)
int_dyn_B_eval = sess.run(int_dyn_B)
return {'int_dyn_err':
np.linalg.norm(int_dyn_A_true - int_dyn_A_eval) + np.linalg.norm(
int_dyn_B_true - int_dyn_B_eval)}
def sample_constraints(_):
constraint_rollouts = [[] for _ in range(n_train_tasks)]
for train_task_idx in range(n_train_tasks):
rollouts = [[] for _ in range(n_constraint_rollouts_per_env)]
envs = [copy(train_newton_envs[train_task_idx]) for _ in range(n_constraint_rollouts_per_env)]
obses = np.array([env.reset() for env in envs])
dones = [False for _ in envs]
prev_obses = obses
for step_idx in range(max_ep_len+1):
not_done_idxes = [i for i, done in enumerate(dones) if not done]
batch_size = len(not_done_idxes)
if batch_size == 0:
break
actions = np.random.choice(n_act_dim, batch_size)
for i, env_idx in enumerate(not_done_idxes):
env = envs[env_idx]
action = actions[i]
obs, r, done, info = env.step(action)
obses[env_idx] = obs
dones[env_idx] = done
rollouts[env_idx].append((prev_obses[env_idx], action))
prev_obses = copy(obses)
constraint_rollouts[train_task_idx].extend([r for r in rollouts if r != []])
size = min(sum(len(r) for r in rollouts) for rollouts in constraint_rollouts)
global train_constraint_example_idxes
global val_constraint_batch
global constraint_obs_t
global constraint_act_t
global constraint_act_t_feats
constraint_obs_t = np.zeros((n_train_tasks, size, n_obs_feats))
constraint_act_t = np.zeros((n_train_tasks, size))
constraint_act_t_feats = np.zeros((n_train_tasks, size, n_act_feats))
for train_task_idx in range(n_train_tasks):
unfeat_obses, actions = list(zip(*sum(
constraint_rollouts[train_task_idx], [])))
obses = [featurize_obs(s) for s in unfeat_obses]
act_feats = [featurize_act(a) for a in actions]
idxes = random.sample(list(range(len(obses))), size)
constraint_obs_t[train_task_idx, :, :] = np.array(obses)[idxes, :]
constraint_act_t[train_task_idx, :] = np.array(actions)[idxes]
constraint_act_t_feats[train_task_idx, :, :] = np.array(act_feats)[idxes, :]
constraint_example_idxes = list(range(size))
random.shuffle(constraint_example_idxes)
n_train_constraint_examples = int(0.9 * size)
train_constraint_example_idxes = constraint_example_idxes[:n_train_constraint_examples]
val_constraint_example_idxes = constraint_example_idxes[n_train_constraint_examples:]
val_constraint_batch = constraint_obs_t[:, val_constraint_example_idxes], constraint_act_t[:, val_constraint_example_idxes], constraint_act_t_feats[:, val_constraint_example_idxes]
def sample_constraint_batch(size):
global n_iters_since_prev_constraint_sample
if n_iters_since_prev_constraint_sample % constraint_sampling_freq == 0:
sample_constraints(size)
n_iters_since_prev_constraint_sample = 0
n_iters_since_prev_constraint_sample += 1
idxes = random.sample(train_constraint_example_idxes, size)
constraint_batch = constraint_obs_t[:, idxes], constraint_act_t[:, idxes], constraint_act_t_feats[:, idxes]
return constraint_batch
train_constraint_example_idxes = None
val_constraint_batch = None
constraint_obs_t = None
constraint_act_t = None
constraint_act_t_feats = None
n_iters_since_prev_constraint_sample = 0
tf.global_variables_initializer().run(session=sess)
n_iters = iterations * demo_obs.shape[1] // batch_size
train_logs = {
'loss_evals': [],
'nll_evals': [],
'ste_evals': [],
'val_loss_evals': [],
'val_nll_evals': [],
'val_ste_evals': [],
'assisted_rew_evals': [],
'assisted_succ_evals': [],
'assisted_crash_evals': [],
'int_dyn_err_evals': []
}
def compute_batch_loss(demo_batch, constraint_batch, step=False, t=None):
demo_batch_obs_t, demo_batch_act_t = demo_batch
constraint_batch_obs_t, constraint_batch_act_t, constraint_batch_act_t_feats = constraint_batch
feed_dict = {
demo_obs_t_ph: demo_batch_obs_t,
demo_act_t_ph: demo_batch_act_t,
demo_batch_size_ph: demo_batch_obs_t.shape[1],
constraint_obs_t_ph: constraint_batch_obs_t,
constraint_act_t_ph: constraint_batch_act_t,
constraint_act_t_feats_ph: constraint_batch_act_t_feats,
constraint_batch_size_ph: constraint_batch_obs_t.shape[1],
}
[loss_eval, neg_avg_log_likelihood_eval, sq_td_err_eval] = sess.run(
[loss, neg_avg_log_likelihood, sq_td_err], feed_dict=feed_dict)
if step:
sess.run(update_op, feed_dict=feed_dict)
d = {
'loss': loss_eval,
'nll': neg_avg_log_likelihood_eval,
'ste': sq_td_err_eval
}
if not step:
d.update(compute_int_dyn_err())
d.update(compute_assisted_perf())
return d
val_log = None
while len(train_logs['loss_evals']) < n_iters:
demo_batch = sample_batch(batch_size)
constraint_batch = sample_constraint_batch(constraint_batch_size)
t = len(train_logs['loss_evals'])
train_log = compute_batch_loss(demo_batch, constraint_batch, step=True, t=t)
if val_log is None or len(train_logs['loss_evals']) % val_update_freq == 0:
val_log = compute_batch_loss(val_demo_batch, val_constraint_batch, step=False, t=t)
print('%d %d %f %f %f %f %f %f %f' % (
t, n_iters, train_log['loss'],
train_log['nll'], train_log['ste'], val_log['loss'],
val_log['nll'], val_log['ste'], val_log['int_dyn_err'])
)
for k, v in train_log.items():
train_logs['%s_evals' % k].append(v)
for k, v in val_log.items():
train_logs['%s%s_evals' % ('val_' if k in ['loss', 'nll', 'ste'] else '', k)].append(v)
for k in ['val_nll_evals', 'val_ste_evals']:
plt.xlabel('Iterations')
plt.ylabel(k.split('_')[1])
plt.plot(train_logs[k])
plt.show()
plt.xlabel('Iterations')
plt.ylabel('Reward')
plt.axhline(y=np.mean(ideal_rew), linestyle='--', color='teal', label='Optimal')
plt.axhline(y=np.mean(unassisted_rew), linestyle=':', color='gray', label='Unassisted')
plt.plot(train_logs['assisted_rew_evals'], color='orange', label='Assisted')
plt.legend(loc='best')
plt.show()
plt.xlabel('Iterations')
plt.ylabel('Success Rate')
plt.axhline(y=np.mean(ideal_succ), linestyle='--', color='teal', label='Optimal')
plt.axhline(y=np.mean(unassisted_succ), linestyle=':', color='gray', label='Unassisted')
plt.plot(train_logs['assisted_succ_evals'], color='orange', label='Assisted')
plt.ylim([-0.05, 1.05])
plt.legend(loc='best')
plt.show()
plt.xlabel('Iterations')
plt.ylabel('Crash Rate')
plt.axhline(y=np.mean(ideal_crash), linestyle='--', color='teal', label='Optimal')
plt.axhline(y=np.mean(unassisted_crash), linestyle=':', color='gray', label='Unassisted')
plt.plot(train_logs['assisted_crash_evals'], color='orange', label='Assisted')
plt.ylim([-0.05, 1.05])
plt.legend(loc='best')
plt.show()
plt.xlabel('Iterations')
plt.ylabel('L2 Error')
plt.plot(train_logs['int_dyn_err_evals'], color='orange')
plt.ylim([-0.05, None])
plt.show()
print(sess.run(int_dyn_A))
print(sess.run(int_dyn_B))
# repeat with ten different random seeds
master_train_logs = []
for _ in range(10):
train_constraint_example_idxes = None
val_constraint_batch = None
constraint_obs_t = None
constraint_act_t = None
constraint_act_t_feats = None
n_iters_since_prev_constraint_sample = 0
tf.global_variables_initializer().run(session=sess)
n_iters = 20000
train_logs = {
'loss_evals': [],
'nll_evals': [],
'ste_evals': [],
'val_loss_evals': [],
'val_nll_evals': [],
'val_ste_evals': [],
'assisted_rew_evals': [],
'assisted_succ_evals': [],
'assisted_crash_evals': [],
'int_dyn_err_evals': []
}
val_log = None
while len(train_logs['loss_evals']) < n_iters:
demo_batch = sample_batch(batch_size)
constraint_batch = sample_constraint_batch(constraint_batch_size)
t = len(train_logs['loss_evals'])
train_log = compute_batch_loss(demo_batch, constraint_batch, step=True, t=t)
if val_log is None or t % val_update_freq == 0:
val_log = compute_batch_loss(val_demo_batch, val_constraint_batch, step=False, t=t)
if t % 1000 == 0:
print('%d %d %f %f %f %f %f %f %f' % (
t, n_iters, train_log['loss'],
train_log['nll'], train_log['ste'], val_log['loss'],
val_log['nll'], val_log['ste'], val_log['int_dyn_err'])
)
for k, v in train_log.items():
train_logs['%s_evals' % k].append(v)
for k, v in val_log.items():
train_logs['%s%s_evals' % ('val_' if k in ['loss', 'nll', 'ste'] else '', k)].append(v)
master_train_logs.append(train_logs)
with open(os.path.join(data_dir, 'master_train_logs.pkl'), 'wb') as f:
pickle.dump(master_train_logs, f, pickle.HIGHEST_PROTOCOL)
# internal2real dynamics transfer
newton_env = train_newton_envs[0].unwrapped
def inverse_real_dyn(state, next_state, vel_thresh=accel):
pos = state[:2]
vel = state[2:]
next_states = np.array([newton_env._obs_of_pos_vel(*newton_env._next_pos_vel(pos, vel, a)) for a in range(n_act_dim)])
if (np.abs(state[2:]) <= vel_thresh).all():
dists = np.linalg.norm(next_state[:2] - next_states[:, :2], axis=1)
else:
dists = np.linalg.norm(next_state[2:] - next_states[:, 2:], axis=1)
return np.argmin(dists)  # action whose predicted next state is closest
def dyn_transfer(state, action):
act_feats = np.array([featurize_act(action)])
obs_feats = np.array([featurize_obs(state)])
feed_dict = {
samp_obs_t_ph: obs_feats,
samp_act_t_feats_ph: act_feats
}
next_state = sess.run(samp_obs_tp1, feed_dict=feed_dict)[0]
return inverse_real_dyn(state, next_state)
def make_assisted_env(goal=None):
test_goal = np.random.random(2) if goal is None else goal
test_reward_func = make_reward_func(test_goal)
test_aristotle_pilot_policy = make_aristotle_pilot_policy(test_goal, denoise=True)
env = PointMassNav(reward_func=test_reward_func, goal=test_goal, using_inertia=True)
env.unwrapped._step_orig = env.unwrapped._step
def _step(self, action):
transferred_act = dyn_transfer(self.curr_obs, action)
obs, r, done, info = self._step_orig(transferred_act)
return obs, r, done, info
env.unwrapped._step = types.MethodType(_step, env.unwrapped)
return test_aristotle_pilot_policy, env
def make_env_without_dyn_transfer(using_inertia=True, goal=None):
test_goal = np.random.random(2) if goal is None else goal
test_reward_func = make_reward_func(test_goal)
test_aristotle_pilot_policy = make_aristotle_pilot_policy(test_goal, denoise=True)
unassisted_env = PointMassNav(using_inertia=using_inertia, reward_func=test_reward_func, goal=test_goal)
return test_aristotle_pilot_policy, unassisted_env
make_unassisted_env = lambda: make_env_without_dyn_transfer(using_inertia=True)
make_ideal_env = lambda: make_env_without_dyn_transfer(using_inertia=False)
n_eval_rollouts = 1000
assisted_rollouts = [run_ep(*make_assisted_env(), render=False) for _ in range(n_eval_rollouts)]
with open(os.path.join(data_dir, 'aristotle_pilot_policy_assisted_rollouts.pkl'), 'wb') as f:
pickle.dump(assisted_rollouts, f, pickle.HIGHEST_PROTOCOL)
with open(os.path.join(data_dir, 'aristotle_pilot_policy_assisted_rollouts.pkl'), 'rb') as f:
assisted_rollouts = pickle.load(f)
unassisted_rollouts = [run_ep(*make_unassisted_env(), render=False) for _ in range(n_eval_rollouts)]
with open(os.path.join(data_dir, 'aristotle_pilot_policy_unassisted_rollouts.pkl'), 'wb') as f:
pickle.dump(unassisted_rollouts, f, pickle.HIGHEST_PROTOCOL)
with open(os.path.join(data_dir, 'aristotle_pilot_policy_unassisted_rollouts.pkl'), 'rb') as f:
unassisted_rollouts = pickle.load(f)
ideal_rollouts = [run_ep(*make_ideal_env(), render=False) for _ in range(n_eval_rollouts)]
with open(os.path.join(data_dir, 'aristotle_pilot_policy_ideal_rollouts.pkl'), 'wb') as f:
pickle.dump(ideal_rollouts, f, pickle.HIGHEST_PROTOCOL)
with open(os.path.join(data_dir, 'aristotle_pilot_policy_ideal_rollouts.pkl'), 'rb') as f:
ideal_rollouts = pickle.load(f)
unassisted_rew = [sum(x[2] for x in r) for r in unassisted_rollouts]
ideal_rew = [sum(x[2] for x in r) for r in ideal_rollouts]
assisted_rew = [sum(x[2] for x in r) for r in assisted_rollouts]
np.mean(unassisted_rew), np.mean(ideal_rew)
np.mean(assisted_rew)
unassisted_succ = [1 if is_succ(r) else 0 for r in unassisted_rollouts]
ideal_succ = [1 if is_succ(r) else 0 for r in ideal_rollouts]
assisted_succ = [1 if is_succ(r) else 0 for r in assisted_rollouts]
np.mean(unassisted_succ), np.mean(ideal_succ)
np.mean(assisted_succ)
unassisted_crash = [1 if is_crash(r) else 0 for r in unassisted_rollouts]
ideal_crash = [1 if is_crash(r) else 0 for r in ideal_rollouts]
assisted_crash = [1 if is_crash(r) else 0 for r in assisted_rollouts]
np.mean(unassisted_crash), np.mean(ideal_crash)
np.mean(assisted_crash)
# viz trajectories
def plot_trajectories(
rollouts, goal, title, file_name=None):
plt.title(title)
for rollout in rollouts:
trajectory = [x[0] for x in rollout] + [rollout[-1][3]]
x, y, vx, vy = list(zip(*trajectory))
if is_succ(rollout):
cmap = mpl.cm.YlGn
elif is_crash(rollout):
cmap = mpl.cm.YlOrRd
else:
cmap = mpl.cm.gray
plt.scatter(x, y, c=range(len(x)), cmap=cmap, alpha=0.75, linewidth=0)
plt.scatter(
[goal[0]], [goal[1]], marker='*', color='yellow',
edgecolor='black', linewidth=1, s=300, alpha=0.5)
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xticks([])
plt.yticks([])
plt.axis('off')
if file_name is not None:
plt.savefig(os.path.join(data_dir, file_name), bbox_inches='tight')
plt.show()
n_viz_rollouts = 100
center_goal = np.array([0.5, 0.5])
test_aristotle_pilot_policy, assisted_env = make_assisted_env(goal=center_goal)
assisted_rollouts = [run_ep(
test_aristotle_pilot_policy, assisted_env, render=False) for _ in range(n_viz_rollouts)]
test_aristotle_pilot_policy, unassisted_env = make_env_without_dyn_transfer(
using_inertia=True, goal=center_goal)
unassisted_rollouts = [run_ep(
test_aristotle_pilot_policy, unassisted_env, render=False) for _ in range(n_viz_rollouts)]
unassisted_rollouts_sample = random.sample(unassisted_rollouts, 10)
mpl.rcParams.update({'font.size': 20})
plot_trajectories(
unassisted_rollouts_sample, center_goal, 'Unassisted', 'unassisted-traj.pdf')
assisted_rollouts_sample = random.sample(assisted_rollouts, 20)
plot_trajectories(assisted_rollouts_sample, center_goal, 'Assisted', 'assisted-traj.pdf')
run_ep(test_aristotle_pilot_policy, assisted_env, render=True)
assisted_env.close()
# viz master logs
with open(os.path.join(data_dir, 'master_train_logs.pkl'), 'rb') as f:
master_train_logs = pickle.load(f)
def err_vs_iter_of_logs(master_train_logs):
n_reps = len(master_train_logs)
max_iter = max(len(
train_logs['int_dyn_err_evals']) for train_logs in master_train_logs)
R = np.zeros((n_reps, max_iter))
R[:, :] = np.nan
for i, train_logs in enumerate(master_train_logs):
errs = train_logs['int_dyn_err_evals']
R[i, :len(errs)] = errs
return R
smooth_win = 100
def moving_avg(d, n=smooth_win):
s = np.concatenate((np.zeros(1), np.cumsum(d).astype(float)))
return (s[n:] - s[:-n]) / n
traj_col_means = lambda x: np.nanmean(x, axis=0)
traj_col_stderrs = lambda x: np.nanstd(x, axis=0) / np.sqrt(
np.count_nonzero(~np.isnan(x), axis=0))
r_mins = lambda x: traj_col_means(x) - traj_col_stderrs(x)
r_maxs = lambda x: traj_col_means(x) + traj_col_stderrs(x)
R = err_vs_iter_of_logs(master_train_logs)
def plot_fill(R, color, label):
x = range(R.shape[1] - (smooth_win - 1))
y1 = moving_avg(r_mins(R), n=smooth_win)
y2 = moving_avg(r_maxs(R), n=smooth_win)
plt.fill_between(
x, y1, y2, where=y2 >= y1, interpolate=True, facecolor=color, alpha=0.5)
plt.plot(moving_avg(traj_col_means(R), n=smooth_win), color=color, label=label)
# +
plt.xlabel('Number of Gradient Steps')
plt.ylabel('Internal Dynamics L2 Error')
plt.title('2D Continuous-State Navigation')
plot_fill(R, 'orange', 'Our Method')
plt.axhline(y=0.25, linestyle='--', color='gray', label='Random')
plt.ylim([-0.05, None])
plt.xlim([0, 10000])
plt.legend(loc='best')
plt.savefig(os.path.join(data_dir, 'err-vs-iter.pdf'), bbox_inches='tight')
plt.show()
# -
| 3.0-continuous-ime.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # A. Random integers
# Similar to the Python random module, NumPy has its own submodule for pseudo-random number generation, called np.random. It provides all the necessary randomized operations and extends them to multi-dimensional arrays. To generate pseudo-random integers, we use the np.random.randint function.
#
# The code below shows example usages of np.random.randint.
# +
import numpy as np
print(np.random.randint(5))
print(np.random.randint(5))
print(np.random.randint(5, high=6))
random_arr = np.random.randint(-3, high=14,
size=(2, 2))
print(repr(random_arr))
# -
# # Explanation
# The np.random.randint function takes in a single required argument, whose meaning depends on the high keyword argument. If high=None (which is the default), then the required argument represents the upper (exclusive) end of the range, with the lower end being 0. Specifically, if the required argument is n, then the random integer is chosen uniformly from the range [0, n).
#
# If high is not None, then the required argument will represent the lower (inclusive) end of the range, while high represents the upper (exclusive) end.
#
# The size keyword argument specifies the size of the output array, where each integer in the array is randomly drawn from the specified range. By default, np.random.randint returns a single integer.
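# We can sanity-check these range semantics by drawing many samples and verifying the bounds. This is a quick illustration, not part of the original lesson's code:

```python
import numpy as np

# with high=None, draws come from [0, n)
draws_low = np.random.randint(5, size=1000)
assert draws_low.min() >= 0 and draws_low.max() <= 4

# with high set, the required argument becomes the inclusive lower end
draws = np.random.randint(3, high=7, size=1000)
assert draws.min() >= 3 and draws.max() <= 6

# by default (size=None), a single integer is returned, not an array
single = np.random.randint(10)
assert isinstance(single, (int, np.integer))
```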
# # B. Utility functions
# Some fundamental utility functions from the np.random module are np.random.seed and np.random.shuffle. We use the np.random.seed function to set the random seed, which allows us to control the outputs of the pseudo-random functions. The function takes in a single integer as an argument, representing the random seed.
#
# The code below uses np.random.seed with the same random seed. Note how the outputs of the random functions in each subsequent run are identical when we set the same random seed.
# +
np.random.seed(1)
print(np.random.randint(10))
random_arr = np.random.randint(3, high=100,
size=(2, 2))
print(repr(random_arr))
# New seed
np.random.seed(2)
print(np.random.randint(10))
random_arr = np.random.randint(3, high=100,
size=(2, 2))
print(repr(random_arr))
# Original seed
np.random.seed(1)
print(np.random.randint(10))
random_arr = np.random.randint(3, high=100,
size=(2, 2))
print(repr(random_arr))
# -
# The np.random.shuffle function allows us to randomly shuffle an array. Note that the shuffling happens in place (i.e. no return value), and shuffling multi-dimensional arrays only shuffles the first dimension.
#
# The code below shows example usages of np.random.shuffle. Note that only the rows of matrix are shuffled (i.e. shuffling along first dimension only).
# +
vec = np.array([1, 2, 3, 4, 5])
np.random.shuffle(vec)
print(repr(vec))
np.random.shuffle(vec)
print(repr(vec))
matrix = np.array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
np.random.shuffle(matrix)
print(repr(matrix))
# -
# # C. Distributions
# Using np.random we can also draw samples from probability distributions. For example, we can use np.random.uniform to draw pseudo-random real numbers from a uniform distribution.
#
# The code below shows usages of np.random.uniform.
print(np.random.uniform())
print(np.random.uniform(low=-1.5, high=2.2))
print(repr(np.random.uniform(size=3)))
print(repr(np.random.uniform(low=-3.4, high=5.9,
size=(2, 2))))
# The function np.random.uniform actually has no required arguments. The keyword arguments, low and high, represent the inclusive lower end and exclusive upper end from which to draw random samples. Since they have default values of 0.0 and 1.0, respectively, the default outputs of np.random.uniform come from the range [0.0, 1.0).
#
# The size keyword argument is the same as the one for np.random.randint, i.e. it represents the output size of the array.
#
# Another popular distribution we can sample from is the normal (Gaussian) distribution. The function we use is np.random.normal.
#
# The code below shows usages of np.random.normal.
print(np.random.normal())
print(np.random.normal(loc=1.5, scale=3.5))
print(repr(np.random.normal(loc=-2.4, scale=4.0,
size=(2, 2))))
# Like np.random.uniform, np.random.normal has no required arguments. The loc and scale keyword arguments represent the mean and standard deviation, respectively, of the normal distribution we sample from.
#
# NumPy provides quite a few more built-in distributions, which are listed in the NumPy documentation for np.random.
#
# # D. Custom sampling
# While NumPy provides built-in distributions to sample from, we can also sample from a custom distribution with the np.random.choice function.
#
# The code below shows example usages of np.random.choice.
colors = ['red', 'blue', 'green']
print(np.random.choice(colors))
print(repr(np.random.choice(colors, size=2)))
print(repr(np.random.choice(colors, size=(2, 2),
p=[0.8, 0.19, 0.01])))
# The required argument for np.random.choice is the custom distribution we sample from. The p keyword argument denotes the probabilities given to each element in the input distribution. Note that the list of probabilities for p must sum to 1.
#
# In the example, we set p such that 'red' has a probability of 0.8 of being chosen, 'blue' has a probability of 0.19, and 'green' has a probability of 0.01. When p is not set, the probabilities are equal for each element in the distribution (and sum to 1).
| 1.Data Manipulation with NumPy/4.Random Numbers Operation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/julianovale/MCDA/blob/main/0007_TOPSIS_exemplo_1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + colab={"base_uri": "https://localhost:8080/"} id="2LU7hLBWugzn" outputId="2c9f5173-bb8c-4617-eb01-448a7328c3be"
# Filename: TOPSIS_example_1.py
# Description: Application of TOPSIS in the first facility
# location problem of Chapter 1
# Authors: <NAME>. & <NAME>.
from numpy import *
# performances of the alternatives
x = array([[8, 7, 2, 1], [5, 3, 7, 5], [7, 5, 6, 4],
[9, 9, 7, 3], [11, 10, 3, 7], [6, 9, 5, 4]])
# weights of the criteria
weights = array([0.4, 0.3, 0.1, 0.2])
# Step 1 (vector normalization): cumsum() produces the
# cumulative sum of the values in the array and can also
# be used with a second argument to indicate the axis to use
col_sums = array(cumsum(x**2, 0))
norm_x = array([[round(x[i, j] / sqrt(col_sums[x.shape[0]
- 1, j]), 3) for j in range(4)] for i in range(6)])
# Step 2 (Multiply each evaluation by the associated weight):
# wnx is the weighted normalized x matrix
wnx = array([[round(weights[i] * norm_x[j, i], 3)
for i in range(4)] for j in range(6)])
# Step 3 (positive and negative ideal solution)
pis = array([amax(wnx[:, :1]), amax(wnx[:, 1:2]),
amax(wnx[:, 2:3]), amax(wnx[:, 3:4])])
nis = array([amin(wnx[:, :1]), amin(wnx[:, 1:2]),
amin(wnx[:, 2:3]), amin(wnx[:, 3:4])])
# Step 4a: determine the distance to the positive ideal
# solution (dpis)
b1 = array([[(wnx[j, i] - pis[i])**2 for i in range(4)]
for j in range(6)])
dpis = sqrt(sum(b1, 1))
# Step 4b: determine the distance to the negative ideal
# solution (dnis)
b2 = array([[(wnx[j, i] - nis[i])**2 for i in range(4)]
for j in range(6)])
dnis = sqrt(sum(b2, 1))
# Step 5: calculate the relative closeness to the ideal
# solution
final_solution = array([round(dnis[i] / (dpis[i] + dnis[i]),
3) for i in range(6)])
print("Closeness coefficient = ", final_solution)
| 0007_TOPSIS_exemplo_1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Some important admin and homework for next time.
#
#
# 1. __Important point number 1__. We are reaching the mid-term of the semester and it is time for some reflection :) I want to know from you guys how the class is going. This is super important for me to make sure that the second half of the semester goes well. Before working on the material for today, please __fill [the mid-term survey](https://docs.google.com/forms/d/e/1FAIpQLSc79NAmXoPfYjmVyVgCDUdyK8narAOuR-CmgYhQM6h3vRZfuA/viewform?usp=sf_link)__ on Google Forms. I promise it should not take more than 5 minutes.
#
#
# 2. __Important point number 2__. We are slowly approaching the time when you guys will work on your own project. There are still a couple of classes to go, but by now you have a good idea of the methods and types of questions we work with in this class. And I would like to start discussing your ideas. Make sure you __complete Exercise 5 (at the end of this notebook) before next Wednesday__. We will take some time to talk about it next time.
#
# # Overview of today's class.
#
# This week's curriculum is about text analysis. The overview is
#
# * Tricks for raw text (NLPP Chapter 3) and finding the important words in a document (TF-IDF)
# * Apply these tricks to study the content of submissions
#
# In the first part, we will take a quick tour of NLPP's chapter 3, which is boring, but an amazing resource that you'll keep returning to. Then we'll talk about how we can use simple statistics to get text to show us what it's all about. We will even do a little visualization.
#
# # Part 1 - Processing real text (from out on the inter-webs)
#
# Ok. So Chapter 3 in NLPP is all about working with text from the real world. Getting text from the internet, cleaning it, tokenizing, modifying (e.g. stemming, converting to lower case, etc) to get the text in shape to work with the NLTK tools you've already learned about - and many more. In the process we'll learn more about regular expressions, as well as unicode; something we've already been struggling with a little bit will now be explained in more detail.
# >
# > **Video lecture**: Short overview of chapter 3 + a few words about kinds of language processing that we don't address in this class.
# >
from IPython.display import YouTubeVideo
YouTubeVideo("Rwakh-HXPJk",width=800, height=450)
# > *Reading*: NLPP Chapter 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.9, and 3.10. It's not important that you go in depth with everything here - the key thing is that you *know that Chapter 3 of this book exists*, and that it's a great place to return to if you're ever in need of an explanation of regular expressions, unicode, or other topics that you forget as soon as you stop using them (and don't worry, I forget about those things too).
# > _Exercise_ 1: Just a couple of examples from the book: Work through the exercises NLPP 3.12: 6, 30.
# ### Prelude to part 2. - Data.
#
# In the following exercises, we will study the text contained in _r/wallstreetbets_ submissions. To make things a bit more exciting, we will work with **all** the submissions posted in 2020 in _r/wallstreetbets_. As you may well guess, we will need both the title and the content of each submission.
#
# To make things a bit less tedious for you guys, I downloaded and made available the data you need (you can find it [here](https://github.com/lalessan/comsocsci2021/blob/master/data/wallstreet_subs.csv.gz)). The dataset consists of all the submissions posted between January 1st and December 31st 2020 with content in English. For each submission, you have the following information: timestamp of creation (__created_utc__), __title__, textual content (__selftext__), and __score__. You are welcome to use this data. If you prefer to download your own (see optional exercise below), that's even better!! As usual, I do not expect you to find a perfect match between your data and mine. In the exercises below, I refer to this data as the "_wallstreetbets submissions dataset_".
#
# _Exercise (Optional)_:
#
# > * Download all submissions posted on _r/wallstreetbets_ in 2020 using [psaw](https://pypi.org/project/psaw/).
# > * For each submission, keep the title, the textual content, the score, the author, and the time of creation.
# > * Remove submissions whose content has been removed, and those that are not in English. You can use the library [langdetect](https://pypi.org/project/langdetect/) to detect the language of a given text.
#
#
# # Part 2 - Words that characterize stocks discussed on r/wallstreetbets
#
# In this section, we'll begin to play around with how far we can get with simple strategies for looking at text. The video is Sune talking about a fun paper, which shows you how little is needed in order to reveal something very interesting about humans that produce text. Then, we'll use a very simple weighting scheme called TF-IDF to find the words in the reddit r/wallstreetbets submissions that characterize different stocks. In cleaning the Reddit submissions, we'll use some of the stuff you've just read about above. Finally, we'll even visualize them in a fun little word cloud (below is what I found for the discussions around Gamestop, Microsoft, and Tesla). The wordclouds may not be immediately understandable. But if you do some research on the important words, you will find that the TF-IDF method extracts quite interesting information.
#
# <img src="https://github.com/lalessan/comsocsci2021/blob/master/files/wordclouds.png?raw=true" alt="Drawing" style="width: 1000px;"/>
#
# > **Video lecture**: Simple methods reveal a lot. Sune talks a little bit about the paper: [Personality, Gender, and Age in the Language of Social Media: The Open-Vocabulary Approach](http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0073791).
YouTubeVideo("wkYvdfkVmlI",width=800, height=450)
# _Exercise 2: Most discussed stocks in r/wallstreetbets_. GME is only one among many stocks people have discussed in _r/wallstreetbets_. In this exercise, we will find the most discussed stocks on _wallstreetbets_. Stocks are identified by their [Ticker symbol](https://en.wikipedia.org/wiki/Ticker_symbol). A Ticker symbol is nothing but a string consisting of letters and numbers, and is typically quite short. For example, the Gamestop symbol is _GME_, Amazon is _AMZN_, Alphabet is _GOOGL_...
#
# > 1. To talk about a specific stock, Redditors often use the corresponding ticker symbol [preceded by the dollar sign](https://www.reddit.com/r/wallstreetbets/comments/5yvvue/why_do_you_put_a_dollar_sign_in_front_of_a_ticker/) (\\$GME, \$AMZN...). Write down a [Regular Expression](https://en.wikipedia.org/wiki/Regular_expression) matching words that begin with a dollar sign "\\$". See NLPP book, section 3.4.
# > 2. Load the _wallstreetbets submission dataset_ as a Pandas DataFrame and create a new column containing both the title and the textual content of each submission (as one long string). We refer to this as the _text_ of the submission.
# > 3. For each submission, find all ticker symbols (those preceded by a dollar sign) contained in the _text_. Use the function [re.findall](https://docs.python.org/3/library/re.html), and the regular expression you created in point 1). Some tips for success:
# > > * Remove matches that are definitely not stock symbols (for example amounts like: \\$100, \$1000k).
# > > * Convert all matches to uppercase
# > > * Remove the dollar sign at the beginning of the symbol (e.g. \\$gme → GME).
# > 4. Create a list containing the top 15 Ticker Symbols by number of occurrences. GME should be among them. If it is not, check again your analysis and/or come talk to me. Google the top 15 symbols and find the corresponding company names. Are they known companies or not?
import pandas as pd
import re
import nltk
filename = "../data/wallstreet_subs.csv.gz"
submissions = pd.read_csv(filename, compression = 'gzip')
# +
regexp = r"[$][a-zA-Z]{1,}"
submissions["text"] = submissions.title.fillna("") + " " + submissions.selftext.fillna("")  # guard against missing selftext
ticker_list = []
for text in submissions.text:
tickers = [ticker[1:].upper() for ticker in re.findall(regexp, text) if not re.search("[0-9]", ticker)]
ticker_list.extend(tickers)
fdist = nltk.FreqDist(ticker_list)
print(fdist.most_common(15))
most_common,_ = zip(*fdist.most_common(15))
most_common = [w.lower() for w in most_common]
print(most_common)
# -
# _Exercise 3: TF-IDF and the stocks discussed on r/wallstreetbets._ The goal for this exercise is to find the words characterizing each of the stocks discussed on r/wallstreetbets. We will focus on the top 15 stocks we identified in Exercise 2, and we will of course use TF-IDF.
#
#
# > 1. First, check out [the wikipedia page for TF-IDF](https://en.wikipedia.org/wiki/Tf%E2%80%93idf). Explain in your own words the point of TF-IDF.
# > * What does TF stand for?
# > * What does IDF stand for?
# >
# > 2. Tokenize the __text__ of each submission. Create a column __tokens__ in your dataframe containing the tokens. Remember the bullets below for success.
# > * If you don't know what _tokenization_ means, go back and read Chapter 3 again. **The advice to go back and check Chapter 3 is valid for every cleaning step below**.
# > * Exclude punctuation.
# > * Exclude URLs
# > * Exclude stop words (if you don't know what stop words are, go back and read NLPP1e again).
# > * Exclude numbers (since they're difficult to interpret in the word cloud).
# > * Set everything to lower case.
# > * **Note** that none of the above has to be perfect. And there's some room for improvisation. You can try using stemming. In my own first run the results didn't look so nice, because some submissions repeat certain words again and again and again, whereas others are very short. For that reason, I decided to use the unique set of words from each submission rather than each word in proportion to how it's actually used. Choices like that are up to you.
# >
# > 3. Find submissions discussing at least one of the top 15 stocks you identified above. To do so:
# > > * Create a function that finds the intersection between a list of tokens and your list of top 15 stocks. For example, your function applied to the tokens: _"[Here, TSLA, submission, GME]"_ should return ["TSLA","GME"]. (_Optional_: you can also try to include cases in which the list of tokens contains a company name among your top 15. For example the function applied to _"[Here, Gamestop, submission]"_ could return ['GME'].)
# > > * Create a new column _stock_ in your DataFrame, containing the output of your function applied to the _text_ column. Values in this column should be lists.
# > > * Handle cases where one post discusses more than one stock by applying the function [__explode__](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.explode.html) to the _stock_ column. This will duplicate submissions associated to multiple stocks. After exploding, the values included in the _stock_ column should be strings.
# > > * Handle cases where none of the selected stocks is discussed by replacing NaN values, for example with "Other".
# >
# > 4. Now, we want to find out which words are important for each *stock*, so we're going to create several ***large documents, one for each stock***. Each document includes all the tokens related to the same stock. We will also have a document including discussions that do not relate to the top 15 stocks.
# > 5. Now, we're ready to calculate the TF for each word. Use the method of your choice to find the top 5 terms within __5 stocks of your choice__.
# > * Describe similarities and differences between the stocks.
# > * Why aren't the TFs necessarily a good description of the stocks?
# > * Next, we calculate IDF for every word.
# > * What base logarithm did you use? Is that important?
# > 6. We're ready to calculate TF-IDF. Do that for the __5 stocks of your choice__.
# > * List the 10 top TF words for each stock.
# > * List the 10 top TF-IDF words for each stock.
# > * Are these 10 words more descriptive of the stock? If yes, what is it about IDF that makes the words more informative?
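# Before working on the real data, here is a minimal pure-Python sketch of the TF-IDF computation the exercise asks for. The toy documents and names below are illustrative, not the wallstreetbets data.

```python
import math
from collections import Counter

def tf_idf(docs):
    """docs: dict mapping document name -> list of tokens."""
    n_docs = len(docs)
    df = Counter()                       # document frequency per term
    for tokens in docs.values():
        df.update(set(tokens))
    scores = {}
    for name, tokens in docs.items():
        tf = Counter(tokens)             # raw term frequency
        # natural log here; a different base only rescales every score
        scores[name] = {t: tf[t] * math.log(n_docs / df[t]) for t in tf}
    return scores

docs = {"GME": ["short", "squeeze", "moon", "moon"],
        "TSLA": ["battery", "delivery", "moon"]}
scores = tf_idf(docs)
print(scores["GME"]["moon"])         # 0.0
print(scores["GME"]["squeeze"] > 0)  # True
```

# A term that appears in every document gets IDF = log(1) = 0, which is exactly why raw TF alone over-weights ubiquitous words.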
from nltk import word_tokenize
from nltk.corpus import stopwords
from tqdm import tqdm
SW = stopwords.words("english")
T = []
for text in tqdm(submissions.text):
text = [word for word in text.split(" ")
                if not re.search(r'^https?://', word)  # drop URLs
and not re.search(r'/r/w', word)]
tokens = set([token.lower() for token in word_tokenize(" ".join(text))
if token not in SW
and token.isalpha()])
T.append(" ".join(list(tokens)))
# +
submissions["tokens"] = T
stock = []
for token in submissions.tokens:
intersection = [w.upper() for w in list(set.intersection(set(token.split(" ")), set(most_common)))]
if len(intersection) > 0:
stock.append(intersection)
else:
stock.append("Other")
submissions["stock"] = stock
submissions = submissions.explode("stock")
# -
# Build one large document of tokens per stock (exercise 3, step 4)
documents = {}
for stock_symbol in most_common:
    stock_symbol = stock_symbol.upper()
    documents[stock_symbol] = " ".join(submissions['tokens'][submissions.stock == stock_symbol].tolist())
# _Exercise 4: The Wordcloud_. It's time to visualize our results!
#
# > * Install the [`WordCloud`](https://pypi.org/project/wordcloud/) module.
# > * Now, create word-cloud for each stock. Feel free to make it as fancy or non-fancy as you like.
# > * Comment on the results. Are these words to be expected? Is there anything that is surprising?
# _Exercise 5: A Study Project in Computational Social Science._
# > 1. Read the [Project Assignment](https://github.com/lalessan/comsocsci2021/wiki/Project-Assignment) page, where I explain how to set up a Study Project.
# > 2. Think of a topic of interest that you would like to study using data downloaded from the Web (Wikipedia, Twitter, Reddit, Facebook, Github, other data sources...), and some of the methods we have seen in this course.
# > 3. What is the topic?
# > 4. What is the data?
# > 5. Write down 3 research questions related to your topic that you would like to investigate.
# > 6. Put together 1 slide including the answers to points 3,4,5.
#
# __Important__: This will be by no means the final choice for your Project Assignment. All I want is for you guys to start thinking about it.
| lectures/Week6.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Copyright 2020 <NAME>
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# -
import os
import gzip
import requests
import numpy as np
import netCDF4 as nc
from collections import OrderedDict as OD
# #### Fetching the Location of a place in `Berlin, Germany`
#
# - Longitude of the location: 13.3946
# - Latitude of the location: 52.5246
# +
def create_dir(dir_name):
try:
os.mkdir(dir_name)
print(dir_name, "Dir Created")
except FileExistsError:
print(dir_name, "Dir Already exists")
return
def fetch_decompress(url_param, file_name):
url = 'https://opendata.dwd.de/climate_environment/CDC/grids_germany/monthly/Project_TRY/{}/{}.gz'.format(url_param,file_name)
with open('./{}/{}'.format(url_param, file_name), 'wb+') as f:
print('Fetching URL: ', url)
payload = requests.get(url)
print("Fetched", file_name)
dec_payload = gzip.decompress(payload.content)
print("Decompressed", file_name)
f.write(dec_payload)
print('Completed Writing', file_name)
return
def delete_file(url_param, file_name):
name = "./{}/{}".format(url_param, file_name)
if os.path.exists(name):
os.remove(name)
print("Deleted "+ name)
else:
print("The file " + name + " does not exist")
return
# -
def getclosest_ij(lats,lons,latpt,lonpt):
# find squared distance of every point on grid
dist_sq = (lats-latpt)**2 + (lons-lonpt)**2
# 1D index of minimum dist_sq element
minindex_flattened = dist_sq.argmin()
# Get 2D index for latvals and lonvals arrays from 1D index
return np.unravel_index(minindex_flattened, lats.shape)
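# A quick sanity check of `getclosest_ij` on a toy grid (the grid and its spacing are illustrative, not DWD data); the function is repeated inside the block so the sketch is self-contained.

```python
import numpy as np

def getclosest_ij(lats, lons, latpt, lonpt):
    # squared distance of every grid point to the query point
    dist_sq = (lats - latpt)**2 + (lons - lonpt)**2
    return np.unravel_index(dist_sq.argmin(), lats.shape)

# 4x4 toy grid around Berlin with 0.25-degree spacing
lats, lons = np.meshgrid(np.arange(52.0, 53.0, 0.25),
                         np.arange(13.0, 14.0, 0.25), indexing="ij")
iy, ix = getclosest_ij(lats, lons, 52.5246, 13.3946)
print(iy, ix)  # 2 2
```

# The nearest grid cell to (52.5246, 13.3946) is at latitude 52.5 and longitude 13.5, i.e. index (2, 2).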
def get_value(url_param, file_name, latpt, lonpt):
output_params = {'air_temperature_mean' : 'temperature',
'pressure' : 'SLP',
'wind_direction' : 'DD',
'wind_speed' : 'FF'}
output_variable = output_params[url_param]
with nc.Dataset('./{}/{}'.format(url_param, file_name)) as ncfile:
print("Opened netCDF: {}".format(file_name))
lon = ncfile.variables['lon']
lat = ncfile.variables['lat']
temp = ncfile.variables[output_variable]
y, x = getclosest_ij(lat[:], lon[:], latpt, lonpt)
value = str(temp[0, y, x])
print("Got the Value {} from {}".format(value, file_name))
return value
params = OD({'air_temperature_mean' : 'TT',
'pressure' : 'PRED',
'wind_direction' : 'DD',
'wind_speed' : 'FF'})
lon, lat = 13.3946, 52.5246
# +
start_year = 1995
end_year = 2004
start_month = 1
end_month = 12
csvfile = open('./dataset.csv', 'w+')
csvfile.write('time,')
csvfile.write(','.join([i for i in params]))
csvfile.write('\n')
for year in range(start_year, end_year+1):
for month in range(start_month, end_month+1):
value_list = []
value_list.append('{}{:02d}'.format(year, month))
        for url_param, measurement_param in params.items():
            print("Fetching {} of Year: {} of Month: {}".format(url_param, year, month))
            file_name = '{}_{}{:02d}_monmean.nc'.format(measurement_param, year, month)
            create_dir(url_param)
            fetch_decompress(url_param, file_name)
            value = get_value(url_param, file_name, lat, lon)
            value_list.append(str(value))
            delete_file(url_param, file_name)  # delete each file right after its value is read
        csv_row = ','.join(value_list) + '\n'  # newline without a trailing comma
        print(csv_row)
        csvfile.write(csv_row)
# -
csvfile.close()
| datasources/create-dataset-monthly.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Name: <NAME>, ID: 1001405116
# # IMDB-sentiment Analysis Using Naive Bayes Classifier
# Text classification is the task of assigning tags or categories to text according to its content. In this analysis, the data set is a collection of 50,000 reviews from IMDB. I have taken the processed data from https://www.kaggle.com/lakshmi25npathi/sentiment-analysis-of-imdb-movie-reviews/data and the original data is available here: http://ai.stanford.edu/~amaas/data/sentiment/. The purpose of this analysis was exploring Naive Bayes classification with text data.
# # Import the data and explore the contents
# Read The data
import pandas as pd
import numpy as np
from sklearn.naive_bayes import MultinomialNB
# +
# Import the data and see the data type
# -
data=pd.read_csv('C:/Users/mxm5116/Desktop/Data Mining/IMDB Dataset.csv')
data.head()
# Check the shape of the data
print(data.shape)
# Now lets, see the summary of the data set
data.describe()
# Check the positive and negative number of sentiment
data['sentiment'].value_counts()
# # a. Divide the dataset into train and test data sets
# # First clean and normalize the data, then divide it into normalized train and test sets
# # Now clean the text
# Import library
from bs4 import BeautifulSoup
import re,string,unicodedata
# Removing the html strips
def strip_html(text):
soup = BeautifulSoup(text, "html.parser")
return soup.get_text()
# Remove the square brackets
def remove_between_square_brackets(text):
return re.sub('\[[^]]*\]', '', text)
# Remove the noisy text
def denoise_text(text):
text = strip_html(text)
text = remove_between_square_brackets(text)
return text
#Apply function on review column
data['review']=data['review'].apply(denoise_text)
# Now remove special character and apply function for the review colums
def remove_special_characters(text, remove_digits=True):
    pattern=r'[^a-zA-Z0-9\s]'  # a-zA-Z: the lowercase 'z' would also match the punctuation between 'Z' and 'a'
text=re.sub(pattern,'',text)
return text
data['review']=data['review'].apply(remove_special_characters)
# Stemming the text
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
import nltk
def simple_stemmer(text):
ps=nltk.porter.PorterStemmer()
text= ' '.join([ps.stem(word) for word in text.split()])
return text
#Apply function on review column
data['review']=data['review'].apply(simple_stemmer)
data.head()
# +
# Convert positive=1 and negative=0 as numeric
def posneg(x):
if x=="negative":
return 0
elif x=="positive":
return 1
return x
filtered_score = data["sentiment"].map(posneg)
data["score"] = filtered_score
data.head()
# +
# Data Preparation for the model
from sklearn.model_selection import KFold, cross_val_score, train_test_split
import random
X = data['review'].values
y = data['sentiment'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)
# -
# # b. Build a vocabulary as list.
# [‘the’ ‘I’ ‘happy’ … ]
# # You may omit rare words for example if the occurrence is less than five times
# # A reverse index as the key value might be handy
# {“the”: 0, “I”:1, “happy”:2 , … }
#
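# Before the CountVectorizer version below, here is a hand-rolled sketch of this step on toy sentences (illustrative): count occurrences, drop rare words, then build the reverse index. Keep in mind that CountVectorizer's `vocabulary_` stores column *indices*, not counts, so occurrence filtering needs actual counts.

```python
from collections import Counter

sentences = ["the movie was happy", "I was happy", "the plot was thin"]
counts = Counter(w for s in sentences for w in s.split())

min_occurrence = 2  # keep words seen at least twice
vocab = sorted(w for w, c in counts.items() if c >= min_occurrence)
word_to_index = {w: i for i, w in enumerate(vocab)}
print(word_to_index)  # {'happy': 0, 'the': 1, 'was': 2}
```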
train_voca='.'.join(X_train)
test_voca='.'.join(X_test)
import nltk
from sklearn.feature_extraction.text import CountVectorizer
nltk.download('punkt')
foovec = CountVectorizer(min_df=1, tokenizer=nltk.word_tokenize)
train_counts = foovec.fit_transform(X_train)
print(train_counts)
train_counts.shape
foovec.vocabulary_
from os import listdir
from collections import Counter
# print the size of the vocab
print(len(foovec.vocabulary_))
# You may omit rare words, for example if the occurrence is less than five times.
# Note: foovec.vocabulary_ maps each token to its column index, not its count,
# so sum the occurrence counts per column first.
min_occurrence = 5
word_counts = np.asarray(train_counts.sum(axis=0)).ravel()
tokens = [w for w, idx in foovec.vocabulary_.items() if word_counts[idx] >= min_occurrence]
print(tokens[1:1000])
print(len(tokens))
# # After removing tokens that occur fewer than five times, the vocabulary shrinks only slightly relative to its full size of roughly 156,000 words. Since the rare (often misspelled) words are such a small fraction of the vocabulary, dropping them will not affect our analysis.
# # c. Calculate the following probability
# Probability of the occurrence
# • P[“the”] = num of documents containing ‘the’ / num of all documents
# Conditional probability based on the sentiment
#
#
words=["the"]
sentences = X_train
count=0
for sentence in sentences :
for word in words :
if word in sentence :
count=count+1
#print(count)
#print(count)
num_of_documents_containing_the=count
print(num_of_documents_containing_the)
num_of_all_documents=40000
print(num_of_all_documents)
Probability_of_the=num_of_documents_containing_the/num_of_all_documents
print(Probability_of_the)
# # • P[“the” | Positive] = # of positive documents containing “the” / num of all positive review documents
# Now take the positive-sentiment rows from the actual training split
train_data=pd.DataFrame({'review': X_train, 'sentiment': y_train})
positive_docs=train_data.loc[train_data['sentiment']=='positive']
positive_docs.head()
# make the list of positive sentiment
train_pos_reviews=positive_docs['review']
train_pos_voca=train_pos_reviews.values.tolist()
train_pos_voca[1:5]
# Now calculate the number of positive documents containing "the".
# Keep train_pos_voca as a list: joining it into one string would make the
# loop below iterate over characters instead of documents.
words=["the"]
sentences = train_pos_voca
count=0
for sentence in sentences :
for word in words :
if word in sentence :
count=count+1
#print(count)
#print(count)
num_of_pos_documents_containing_the=count
print(num_of_pos_documents_containing_the)
# Find the totl positive documents in training data set
num_of_all_pos_documents=positive_docs['review'].count()
print(num_of_all_pos_documents)
# Now calculate P[“the” | Positive] = # of positive documents containing “the” / num of all positive review documents
probability_0f_the_in_positive_docs=num_of_pos_documents_containing_the/num_of_all_pos_documents
print(probability_0f_the_in_positive_docs)
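# For P[Positive | word] (asked for in exercise e below), Bayes' rule combines the class-conditional probabilities like the one computed above with the class priors. A toy sketch with made-up numbers (not the IMDB estimates):

```python
# Assumed toy values for P(word | positive) and P(word | negative)
p_w_pos, p_w_neg = 0.6, 0.2
p_pos, p_neg = 0.5, 0.5  # class priors; this dataset is balanced

# Bayes' rule: P(pos | w) = P(w | pos) P(pos) / [P(w|pos)P(pos) + P(w|neg)P(neg)]
p_pos_given_w = p_w_pos * p_pos / (p_w_pos * p_pos + p_w_neg * p_neg)
print(round(p_pos_given_w, 2))  # 0.75
```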
# # d. Calculate accuracy using dev dataset
# # Conduct five fold cross validation
#
# +
# Convert the data into vector format
tf_idf_vect = TfidfVectorizer(ngram_range=(1,2))
tf_idf_train = tf_idf_vect.fit_transform(X_train)
tf_idf_test = tf_idf_vect.transform(X_test)
alpha_range = list(np.arange(0,10,1))
len(alpha_range)
# +
# take different values of alpha in cross validation and finding the accuracy score
from sklearn.naive_bayes import MultinomialNB
alpha_scores=[]
for a in alpha_range:
clf = MultinomialNB(alpha=a)
scores = cross_val_score(clf, tf_idf_train, y_train, cv=5, scoring='accuracy')
alpha_scores.append(scores.mean())
print(a,scores.mean())
# +
# Plot b/w misclassification error and CV mean score.
import matplotlib.pyplot as plt
MSE = [1 - x for x in alpha_scores]
optimal_alpha_bnb = alpha_range[MSE.index(min(MSE))]
# plot misclassification error vs alpha
plt.plot(alpha_range, MSE)
plt.xlabel('hyperparameter alpha')
plt.ylabel('Misclassification Error')
plt.show()
# +
optimal_alpha_bnb
# For alpha = 1, we got the minimum misclassification error
# -
# # e. Do following experiments
# Compare the effect of Smoothing
# Derive Top 10 words that predicts positive and negative class
# • P[Positive| word]
#
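# Why smoothing matters, in miniature (toy counts, illustrative): with alpha = 0 an unseen word gets probability zero, which wipes out the whole product of word likelihoods for that class; Laplace smoothing keeps every estimate strictly positive.

```python
def word_prob(count, total, vocab_size, alpha):
    # Laplace-smoothed estimate of P(word | class)
    return (count + alpha) / (total + alpha * vocab_size)

vocab_size, total_pos_words = 1000, 5000
print(word_prob(0, total_pos_words, vocab_size, alpha=0))      # 0.0
print(word_prob(0, total_pos_words, vocab_size, alpha=1) > 0)  # True
```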
# +
# Effects of non-smoothing and smoothing
# -
# # We have already seen the effects of smoothing versus no smoothing. With alpha=0 (no smoothing) we got an accuracy of 82.33%, whereas with smoothing the accuracy was always higher. We got the best smoothing parameter alpha=1, with the highest accuracy of 88.46%.
# Now lets see the highest positive and negative words that has highest sentiment prediction capacity
import re
import string
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.stem.wordnet import WordNetLemmatizer
nltk.download('stopwords')
# Now we will remove stop words, as they do not carry significant meaning, and store the positive and negative words for selection
stop = set(stopwords.words('english'))
sno = nltk.stem.SnowballStemmer('english')
def cleanhtml(sentence):
cleanr = re.compile('<.*?>')
cleantext = re.sub(cleanr, ' ', sentence)
return cleantext
def cleanpunc(sentence):
cleaned = re.sub(r'[?|!|\'|"|#]',r'',sentence)
cleaned = re.sub(r'[.|,|)|(|\|/]',r' ',cleaned)
return cleaned
i=0
str1=' '
final_string=[]
all_positive_words=[]
all_negative_words=[]
s=''
for sent in data['review'].values:
filtered_sentence=[]
sent=cleanhtml(sent)
for w in sent.split():
for cleaned_words in cleanpunc(w).split():
if((cleaned_words.isalpha()) & (len(cleaned_words)>2)):
if(cleaned_words.lower() not in stop):
s=(sno.stem(cleaned_words.lower())).encode('utf8')
filtered_sentence.append(s)
if (data['score'].values)[i] == 1:
all_positive_words.append(s)
if(data['score'].values)[i] == 0:
all_negative_words.append(s)
else:
continue
else:
continue
str1 = b" ".join(filtered_sentence)
final_string.append(str1)
i+=1
total_positive_words = len(all_positive_words)
total_negative_words = len(all_negative_words)
print(total_positive_words)
print(total_negative_words)
import random
apw = random.sample(all_positive_words, 10000)
anw = random.sample(all_negative_words, 10000)
freq_negative_words = Counter(anw)
freq_positive_words = Counter(apw)
# +
#Lets see positive sentiment first
# -
lst=[]
for key in freq_positive_words:
prob = freq_positive_words[key]/total_positive_words
lst.append([key,prob])
table_positive = pd.DataFrame(lst,columns=['positive_words','probability'])
table_positive = table_positive.sort_values('probability', axis=0, ascending=False, inplace=False, kind='quicksort', na_position='last')
table_positive.head(20)
from operator import itemgetter
posi={}
i=0
for key, value in sorted(freq_positive_words.items(), key = itemgetter(1), reverse = True):
    if(i>=10):
break
posi[key]=value
i+=1
posi
# +
plt.bar(range(len(posi)), list(posi.values()), align='center')
plt.xticks(range(len(posi)), list(posi.keys()))
print("Top 10 words that predicts positive sentiment")
plt.show()
# -
# Now lets see top 10 negative sentiment words
lst=[]
for key in freq_negative_words:
prob = freq_negative_words[key]/total_negative_words
lst.append([key,prob])
table_negative = pd.DataFrame(lst,columns=['negative_words','probability'])
table_negative = table_negative.sort_values('probability', axis=0, ascending=False, inplace=False, kind='quicksort', na_position='last')
table_negative.head(20)
nega={}
i=0
for key, value in sorted(freq_negative_words.items(), key = itemgetter(1), reverse = True):
    if(i>=10):
break
nega[key]=value
i+=1
nega
# +
plt.bar(range(len(nega)), list(nega.values()), align='center')
plt.xticks(range(len(nega)), list(nega.keys()))
print("Top 10 words that predicts negative sentiment")
plt.show()
# -
# # f. Using the test dataset
# Use the optimal hyperparameters you found in the step e, and use it to calculate the final accuracy.
#
# +
optimal_alpha_bnb
# For alpha = 1, we got the minimum misclassification error
# -
# Now lets see Naive bayes model
clf = MultinomialNB(alpha=1)
clf.fit(tf_idf_train,y_train)
y_pred_test = clf.predict(tf_idf_test)
from sklearn.metrics import accuracy_score
from collections import Counter
from sklearn.metrics import accuracy_score
acc = accuracy_score(y_test, y_pred_test, normalize=True) * float(100)
print('\n****Test accuracy is',(acc))
# Now lets see the confusion matrix to see the performance in visualization of classification algorithm
import seaborn as sns
from sklearn.metrics import confusion_matrix
from sklearn import metrics
cm_test = confusion_matrix(y_test,y_pred_test)
cm_test
sns.heatmap(cm_test,annot=True,fmt='d')
# Now lets see the train acuracy
y_pred_train = clf.predict(tf_idf_train)
acc = accuracy_score(y_train, y_pred_train, normalize=True) * float(100)
print('\n****Train accuracy is %d%%' % (acc))
cm_train = confusion_matrix(y_train,y_pred_train)
cm_train
sns.heatmap(cm_train,annot=True,fmt='d')
# # With the best hyperparameter alpha=1, we got a test accuracy of 88.98% and a train accuracy of 96%, which is good. The confusion matrices give a clear visualization of the correct predictions and the smaller number of wrong ones.
# # References
# 01. https://www.kaggle.com/lakshmi25npathi/sentiment-analysis-of-imdb-movie-reviews
# 02. https://towardsdatascience.com/sentiment-analysis-with-python-part-1-5ce197074184
# 03. https://www.dataquest.io/blog/naive-bayes-tutorial/
# 04. https://levelup.gitconnected.com/movie-review-sentiment-analysis-with-naive-bayes-machine-learning-from-scratch-part-v-7bb869391bab
# 05. https://medium.com/@krsatyam1996/imdb-movie-review-polarity-using-naive-bayes-classifier-9f92c13efa2d
#
| files/Miah 03.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (tensorflow)
# language: python
# name: python3
# ---
# +
## Import Statements
from scipy.linalg import block_diag
from subprocess import call
from tqdm import tqdm
import glob
import matplotlib as mpl
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np
import os
import re
import tensorflow.compat.v1 as tf
import time
# %config Completer.use_jedi = False
tf.disable_v2_behavior()
# Keep TensorFlow GPU off for now
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
recalculate = False
# -
# # 3LN network (Fig. 2a,b)
# +
metadata = {}
metadata['n_n'] = 1+5*3 # number of neurons
metadata['p_n'] = 1 # number of PNs
metadata['l_n'] = 5*3 # number of LNs
ln_block = np.ones((metadata['l_n']//5, metadata['l_n']//5))  # one 3x3 LN group
metadata['fgaba_mat'] = block_diag(np.array([[0]]), *([ln_block]*5))  # PN block plus five identical LN groups
np.fill_diagonal(metadata['fgaba_mat'],0)
metadata['g_gaba'] = 1.2
metadata['sim_res'] = 0.01
n_syn_fgaba = int(metadata['fgaba_mat'].sum())
n_syn_sgaba = 0
n_syn_ach = 0
# +
plt.figure(figsize=(5,5))
colors = plt.cm.inferno(np.linspace(0.2,0.8,3))
G= nx.from_numpy_matrix(metadata['fgaba_mat'][1:4,:][:,1:4],create_using=nx.DiGraph)
pos = nx.layout.circular_layout(G)
M = G.number_of_edges()
nodes = nx.draw_networkx_nodes(G, pos, node_size=2000, node_color=[colors[0],colors[1],colors[2]],nodelist=[0,1,2],node_shape='s')
edges = nx.draw_networkx_edges(G, pos, node_size=2000, arrowstyle='-|>',
arrowsize=40, width=1.5,connectionstyle='arc3, rad=0.1',edge_color='indianred')
nx.draw_networkx_labels(G,pos,{0:r"$LN_1$",1:r"$LN_2$",2:r"$LN_3$"},font_size=16,font_color='white')
ax = plt.gca()
ax.set_axis_off()
plt.savefig('Figures/LN_only_graph.svg')
plt.show()
# -
if recalculate:
np.random.seed(74932)
samplespace = [[0]+[1,0,0]*5,[0]+[0,1,0]*5,[0]+[0,0,1]*5]
v = [[0]+[0,0,0]*5]
order = np.random.choice(np.arange(3), size=9)
while np.any(np.diff(order)==0):
order = np.random.choice(np.arange(3), size=9)
for i in order:
v.append(samplespace[i])
v = np.array(v)
blocktime = 1000 # in ms
buffer = 500 # in ms
sim_res = metadata['sim_res'] # simulation resolution (in ms)
width = int(blocktime/sim_res)
tfilter_base = np.ones(width)
width_red = int(0.1*blocktime/sim_res)
tfilter = np.zeros_like(tfilter_base)
tfilter[:width_red] = 1
sim_time = len(v)*blocktime + 2*buffer # total simulation time (in ms)
t = np.arange(0,sim_time,sim_res) # duration of simulation
current_input = np.ones((metadata['n_n'],t.shape[0]-int(2*buffer/sim_res)))
for i in range(len(v)):
current_input[:,i*width:(i+1)*width]=0.0735*current_input[:,i*width:(i+1)*width]*tfilter_base
current_input[:,i*width:(i+1)*width]+= 0.15*(current_input[:,i*width:(i+1)*width].T*v[i]).T*tfilter
current_input = np.concatenate([np.zeros((current_input.shape[0],int(buffer/sim_res))),current_input,np.zeros((current_input.shape[0],int(buffer/sim_res)))],axis=1)
current_input += 0.05*current_input*np.random.normal(size=current_input.shape)+ 0.001*np.random.normal(size=current_input.shape)
state_vector = [-45]* metadata['p_n']+[-45]* metadata['l_n'] + [0.5]* (metadata['n_n'] + 4*metadata['p_n'] + 3*metadata['l_n']) + [2.4*(10**(-4))]*metadata['l_n'] + [0]*(n_syn_ach+n_syn_fgaba+2*n_syn_sgaba) + [-(sim_time+1)]*metadata['n_n']
state_vector = np.array(state_vector)
state_vector = state_vector + 0.005*state_vector*np.random.normal(size=state_vector.shape)
np.save('__simcache__/metadata.npy',metadata,allow_pickle=True)
np.save('__simcache__/state_vector',state_vector)
np.save('__simcache__/current_input',current_input)
np.save('__simcache__/time',np.array_split(t,2*(len(v)+1)))
for i in tqdm(range(2*(len(v)+1))):
call(['python','simple5x3.py',str(i)])
dataset = []
files = os.listdir('__simoutput__/')
files.sort(key=lambda var:[int(x) if x.isdigit() else x for x in re.findall(r'[^0-9]|[0-9]+', var)])
for i in files:
dataset.append(np.load(f'__simoutput__/{i}'))
dataset = np.concatenate(dataset)[:,:16]
rorder = np.random.choice(np.arange(5),replace=False,size=5)
order_rep = np.concatenate([np.arange(1,16,3,dtype=np.int64)[rorder],np.arange(2,16,3,dtype=np.int64)[rorder],np.arange(3,16,3,dtype=np.int64)[rorder]])
temp = dataset[:,order_rep]
fire = np.logical_and(temp[:-1,:]<-20,temp[1:,:]>-20)
events = []
for i in range(fire.shape[1]):
events.append(np.arange(temp.shape[0])[:-1][fire[:,i]])
events = np.array(events,dtype=object)
np.save("../data/3LN/LN_only_events.npy",events)
np.save("../data/3LN/LN_only_dataset.npy",dataset)
np.save("../data/3LN/LN_only_current.npy",current_input)
files = glob.glob('__simcache__/*')
for f in files:
os.remove(f)
files = glob.glob('__simoutput__/*')
for f in files:
os.remove(f)
else:
events = np.load("../data/3LN/LN_only_events.npy",allow_pickle=True)
dataset = np.load("../data/3LN/LN_only_dataset.npy",allow_pickle=True)
current_input = np.load("../data/3LN/LN_only_current.npy",allow_pickle=True)
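# The event-extraction step above registers a spike whenever the membrane potential crosses the -20 mV threshold from below between two consecutive samples. A minimal self-contained sketch of the same upward-crossing test (the toy trace values below are hypothetical):

```python
import numpy as np

# Toy membrane-potential trace (hypothetical values, in mV).
v = np.array([-45.0, -30.0, -10.0, 5.0, -40.0, -25.0, -15.0, -50.0])

# A spike is registered at index i when v[i] < -20 and v[i+1] > -20,
# i.e. an upward crossing of the -20 mV threshold, as in the code above.
fire = np.logical_and(v[:-1] < -20, v[1:] > -20)
spike_indices = np.arange(v.shape[0] - 1)[fire]
print(spike_indices)  # [1 5]
```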
# +
colors = plt.cm.inferno(np.linspace(0.2,0.8,3))
fig, ax = plt.subplots(2,1,sharex=True,figsize=(12,6))
ax[0].eventplot(events,linelengths=0.8,color=[colors[0]]*5+[colors[1]]*5+[colors[2]]*5)
ax[0].set_xlim(0,11000)
ax[0].set_ylim(-0.5,14.5)
for i in range(1,4):
ax[1].plot(0.1*(i-1)+current_input[i,:],color=colors[i-1])
ax[0].spines['top'].set_visible(False)
ax[0].spines['right'].set_visible(False)
ax[0].spines['bottom'].set_visible(False)
ax[1].spines['top'].set_visible(False)
ax[1].spines['right'].set_visible(False)
ax[0].set_yticks(np.arange(15))
ax[0].set_yticklabels(['','','LN 3','','','','','LN 2','','','','','LN 1','',''])
ax[1].set_yticks(np.arange(0,0.30,0.05))
ax[1].set_yticklabels(['0','0.05','0','0.05','0','0.05'])
ax[1].set_ylabel("Excitatory Drive (E)")
plt.tight_layout()
plt.savefig('Figures/Fig_LN_only.svg')
plt.show()
# -
# # 3PN3LN network (Fig 2c,d)
# +
metadata = {}
metadata['n_n'] = 5*3+5*3 # number of neurons
metadata['p_n'] = 5*3 # number of PNs
metadata['l_n'] = 5*3 # number of LNs
metadata['fgaba_mat'] = block_diag(np.zeros((metadata['p_n'],metadata['p_n'])),np.ones((metadata['l_n']//5,metadata['l_n']//5)),np.ones((metadata['l_n']//5,metadata['l_n']//5)),np.ones((metadata['l_n']//5,metadata['l_n']//5)),np.ones((metadata['l_n']//5,metadata['l_n']//5)),np.ones((metadata['l_n']//5,metadata['l_n']//5)))
np.fill_diagonal(metadata['fgaba_mat'],0)
metadata['fgaba_mat'][:metadata['p_n'],metadata['l_n']:] = metadata['fgaba_mat'][metadata['l_n']:,metadata['l_n']:]
metadata['sgaba_mat'] = metadata['fgaba_mat']
metadata['ach_mat'] = np.zeros_like(metadata['fgaba_mat'])
metadata['ach_mat'][metadata['p_n']:,:metadata['l_n']] = np.eye(metadata['p_n'])
metadata['sim_res'] = 0.01
n_syn_fgaba = int(metadata['fgaba_mat'].sum())
n_syn_sgaba = int(metadata['fgaba_mat'].sum())
n_syn_ach = int(metadata['ach_mat'].sum())
# +
np.random.seed(48430)
colors = plt.cm.inferno(np.linspace(0.2,0.8,3))
plt.figure(figsize=(5,5))
G = nx.from_numpy_matrix((metadata['fgaba_mat'][[0,1,2,15,16,17],:][:,[0,1,2,15,16,17]]+metadata['ach_mat'][[0,1,2,15,16,17],:][:,[0,1,2,15,16,17]]).T,create_using=nx.DiGraph)
m1 = metadata['fgaba_mat'][[0,1,2,15,16,17],:][:,[0,1,2,15,16,17]].T
edges1 = []
for i in range(6):
for j in range(6):
if m1[i,j]:
edges1.append((i,j))
m2 = metadata['ach_mat'][[0,1,2,15,16,17],:][:,[0,1,2,15,16,17]].T
edges2 = []
for i in range(6):
for j in range(6):
if m2[i,j]:
edges2.append((i,j))
pos = nx.layout.fruchterman_reingold_layout(G)
M = G.number_of_edges()
nodes = nx.draw_networkx_nodes(G, pos, node_size=2000, node_color=[colors[0],colors[1],colors[2]],nodelist=[0,1,2],node_shape='o')
nodes = nx.draw_networkx_nodes(G, pos, node_size=2000, node_color=[colors[0],colors[1],colors[2]],nodelist=[3,4,5],node_shape='s')
edges = nx.draw_networkx_edges(G, pos, node_size=2000, arrowstyle='-|>',
arrowsize=25, width=1,connectionstyle='arc3, rad=0.1',edgelist=edges1,edge_color='indianred')
edges = nx.draw_networkx_edges(G, pos, node_size=2000, arrowstyle='-|>',
arrowsize=25, width=1,connectionstyle='arc3, rad=0.1',edgelist=edges2)
nx.draw_networkx_labels(G,pos,{0:r"$PN_1$",1:r"$PN_2$",2:r"$PN_3$",3:r"$LN_1$",4:r"$LN_2$",5:r"$LN_3$"},font_size=16,font_color='white')
ax = plt.gca()
ax.set_axis_off()
plt.savefig('Figures/LN_PN_graph.svg')
plt.show()
# -
if recalculate:
np.random.seed(8204491)
samplespace = [[0.31,0,0]*5,[0,0.31,0]*5,[0,0,0.31]*5]
v = []
order = np.random.choice(np.arange(3), size=10)
while np.any(np.diff(order)==0):
order = np.random.choice(np.arange(3), size=10)
for i in order:
v.append(samplespace[i])
v = np.array(v)
blocktime = 1000 # in ms
buffer = 500 # in ms
sim_res = metadata['sim_res'] # simulation resolution (in ms)
width = int(blocktime/sim_res)
tfilter_base = np.ones(width)
width_red = int(0.8*blocktime/sim_res)
tfilter = np.concatenate([[0,0],1-np.exp(-0.0008*np.arange(width_red//12)),0.6+0.4*np.exp(-0.0002*np.arange(7*width_red//12)),0.6*np.exp(-0.0002*np.arange(width_red//3)),np.zeros(int(blocktime/sim_res)//5)])
sim_time = len(v)*blocktime + 2*buffer # total simulation time (in ms)
t = np.arange(0,sim_time,sim_res) # duration of simulation
current_input = np.ones((metadata['n_n'],t.shape[0]-int(2*buffer/sim_res)))
for i in range(len(v)):
current_input[:metadata['p_n'],i*width:(i+1)*width] = (current_input[:metadata['p_n'],i*width:(i+1)*width].T*v[i]).T*tfilter
current_input[metadata['p_n']:,i*width:(i+1)*width] = 0.0735*current_input[metadata['p_n']:,i*width:(i+1)*width]*tfilter_base
current_input = np.concatenate([np.zeros((current_input.shape[0],int(buffer/sim_res))),current_input,np.zeros((current_input.shape[0],int(buffer/sim_res)))],axis=1)
current_input += 0.05*current_input*np.random.normal(size=current_input.shape)+ 0.001*np.random.normal(size=current_input.shape)
state_vector = [-45]* metadata['p_n']+[-45]* metadata['l_n'] + [0.5]* (metadata['n_n'] + 4*metadata['p_n'] + 3*metadata['l_n']) + [2.4*(10**(-4))]*metadata['l_n'] + [0]*(n_syn_ach+n_syn_fgaba+2*n_syn_sgaba) + [-(sim_time+1)]*metadata['n_n']
state_vector = np.array(state_vector)
state_vector = state_vector + 0.005*state_vector*np.random.normal(size=state_vector.shape)
np.save('__simcache__/metadata.npy',metadata,allow_pickle=True)
np.save('__simcache__/state_vector',state_vector)
np.save('__simcache__/current_input',current_input)
np.save('__simcache__/time',np.array_split(t,2*(len(v)+1)))
for i in tqdm(range(2*(len(v)+1))):
call(['python','simple5x3pn.py',str(i)])
dataset = []
files = os.listdir('__simoutput__/')
files.sort(key=lambda var:[int(x) if x.isdigit() else x for x in re.findall(r'[^0-9]|[0-9]+', var)])
for i in files:
dataset.append(np.load(f'__simoutput__/{i}'))
dataset = np.concatenate(dataset)[:,:30]
order_rep_LN = np.concatenate([np.arange(15,30,3,dtype=np.int64),np.arange(16,30,3,dtype=np.int64),np.arange(17,30,3,dtype=np.int64)])
temp_LN = dataset[:,order_rep_LN]
fire_LN = np.logical_and(temp_LN[:-1,:]<-20,temp_LN[1:,:]>-20)
events_LN = []
for i in range(fire_LN.shape[1]):
events_LN.append(np.arange(temp_LN.shape[0])[:-1][fire_LN[:,i]])
events_LN = np.array(events_LN,dtype=object)
order_rep_PN = np.concatenate([np.arange(0,15,3,dtype=np.int64),np.arange(1,15,3,dtype=np.int64),np.arange(2,15,3,dtype=np.int64)])
temp_PN = dataset[:,order_rep_PN]
fire_PN = np.logical_and(temp_PN[:-1,:]<0,temp_PN[1:,:]>0)
events_PN = []
for i in range(fire_PN.shape[1]):
events_PN.append(np.arange(temp_PN.shape[0])[:-1][fire_PN[:,i]])
events_PN = np.array(events_PN,dtype=object)
np.save("../data/3PN3LN/LN_PN_events_PN.npy",events_PN)
np.save("../data/3PN3LN/LN_PN_events_LN.npy",events_LN)
np.save("../data/3PN3LN/LN_PN_dataset.npy",dataset)
np.save("../data/3PN3LN/LN_PN_current.npy",current_input)
files = glob.glob('__simcache__/*')
for f in files:
os.remove(f)
files = glob.glob('__simoutput__/*')
for f in files:
os.remove(f)
else:
events_PN = np.load("../data/3PN3LN/LN_PN_events_PN.npy",allow_pickle=True)
events_LN = np.load("../data/3PN3LN/LN_PN_events_LN.npy",allow_pickle=True)
dataset = np.load("../data/3PN3LN/LN_PN_dataset.npy",allow_pickle=True)
current_input = np.load("../data/3PN3LN/LN_PN_current.npy",allow_pickle=True)
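# The chunked simulation outputs are reassembled by sorting the filenames with a natural-sort key, so that a name like 'part10' sorts after 'part2' instead of lexicographically before it. A minimal sketch of the key used above (the filenames here are illustrative):

```python
import re

# Split a name into digit and non-digit runs; digit runs compare numerically.
def natural_key(var):
    return [int(x) if x.isdigit() else x for x in re.findall(r'[^0-9]|[0-9]+', var)]

files = ['part10.npy', 'part2.npy', 'part1.npy']
files.sort(key=natural_key)
print(files)  # ['part1.npy', 'part2.npy', 'part10.npy']
```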
# +
colors = plt.cm.inferno(np.linspace(0.2,0.8,3))
fig, ax = plt.subplots(4,1,sharex=True,figsize=(12,7))
ax[1].eventplot(events_LN,linelengths=0.8,color=[colors[0]]*5+[colors[1]]*5+[colors[2]]*5)
ax[1].set_xlim(0,11000)
ax[1].set_ylim(-0.5,14.5)
ax[0].eventplot(events_PN,linelengths=0.8,color=[colors[0]]*5+[colors[1]]*5+[colors[2]]*5)
ax[0].set_xlim(0,11000)
ax[0].set_ylim(-0.5,14.5)
for i in range(3):
ax[2].plot(0.35*i+current_input[i,:],color=colors[i])
for i in range(15,18):
ax[3].plot(0.1*(i-15)+current_input[i,:],color=colors[i-15])
ax[0].spines['top'].set_visible(False)
ax[0].spines['right'].set_visible(False)
ax[0].spines['bottom'].set_visible(False)
ax[1].spines['top'].set_visible(False)
ax[1].spines['right'].set_visible(False)
ax[1].spines['bottom'].set_visible(False)
ax[2].spines['top'].set_visible(False)
ax[2].spines['right'].set_visible(False)
ax[2].spines['bottom'].set_visible(False)
ax[3].spines['top'].set_visible(False)
ax[3].spines['right'].set_visible(False)
ax[1].set_yticks(np.arange(15))
ax[1].set_yticklabels(['','','LN 3','','','','','LN 2','','','','','LN 1','',''])
ax[0].set_yticks(np.arange(15))
ax[0].set_yticklabels(['','','PN 3','','','','','PN 2','','','','','PN 1','',''])
ax[2].set_yticks(np.arange(0,3*0.35,0.35/2))
ax[2].set_yticklabels(['0','0.175','0','0.175','0','0.175'])
ax[2].set_ylabel("Perturbation Drive (P)")
ax[3].set_yticks(np.arange(0,0.3,0.05))
ax[3].set_yticklabels(['0','0.05','0','0.05','0','0.05'])
ax[3].set_ylabel("Excitatory Drive (E)")
plt.tight_layout()
plt.savefig('Figures/Fig_LN_PN.svg')
plt.show()
# -
# # 30LN network (Fig 2e,f,g)
# +
graphno,pertseed = 2,59428
metadata = {}
metadata['n_n'] = 1+30 # number of neurons
metadata['p_n'] = 1 # number of PNs
metadata['l_n'] = 30 # number of LNs
temp = np.load(f'../modules/networks/matrix_{graphno}.npy')
metadata['fgaba_mat'] = block_diag(np.array([[0]]),temp)
np.fill_diagonal(metadata['fgaba_mat'],0)
metadata['g_gaba'] = 1.5
metadata['sim_res'] = 0.01
n_syn_fgaba = int(metadata['fgaba_mat'].sum())
n_syn_sgaba = 0
n_syn_ach = 0
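# A sketch of the connectivity construction used in these cells: scipy's block_diag stacks a zero block for the PN(s) with an all-ones coupling block per LN group, and the diagonal is then zeroed to remove self-connections. The sizes below (1 PN and two groups of 2 LNs) are illustrative:

```python
import numpy as np
from scipy.linalg import block_diag

# 1 unconnected PN followed by two fully coupled 2-LN groups.
mat = block_diag(np.array([[0.]]), np.ones((2, 2)), np.ones((2, 2)))
np.fill_diagonal(mat, 0)  # no self-connections
print(mat.astype(int))
```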
# +
np.random.seed(783385)
plt.figure(figsize=(6,6))
inv_G = nx.from_numpy_matrix(1-metadata['fgaba_mat'][1:,1:],create_using=nx.Graph)
G = nx.from_numpy_matrix(metadata['fgaba_mat'][1:,1:],create_using=nx.Graph)
pos = nx.layout.fruchterman_reingold_layout(inv_G)
M = G.number_of_edges()
nodes = nx.draw_networkx_nodes(G, pos, node_size=200, node_color=plt.cm.inferno(np.linspace(0.2,0.8,30)))
edges = nx.draw_networkx_edges(G, pos, node_size=200, arrowstyle='-|>',
arrowsize=10, width=0.5,connectionstyle='arc3, rad=0.1',edge_color='indianred')
ax = plt.gca()
ax.set_axis_off()
plt.savefig(f"Figures/LN_only_graph_{graphno}.svg")
plt.show()
# -
plt.figure(figsize=(7,7))
mpl.rcParams.update({'font.size': 22})
plt.imshow(metadata['fgaba_mat'],aspect='equal',cmap=plt.cm.inferno)
plt.clim(-0.2,1.2)
plt.xticks([0,9,19,29],[1,10,20,30])
plt.xlabel('Neuron Number')
plt.yticks([0,9,19,29],[1,10,20,30])
plt.ylabel('Neuron Number')
plt.savefig("Figures/LN_only_connectivity_2.svg")
if recalculate:
np.random.seed(pertseed)
v = [[0]*31]
elems=[1]*15+[0]*15
np.random.shuffle(elems)
v.append([0]+elems)
for i in range(4):
np.random.shuffle(elems)
v.append([0]+elems)
v = np.array(v)
blocktime = 1000 # in ms
buffer = 500 # in ms
sim_res = metadata['sim_res'] # simulation resolution (in ms)
width = int(blocktime/sim_res)
tfilter_base = np.ones(width)
width_red = int(0.1*blocktime/sim_res)
tfilter = np.zeros_like(tfilter_base)
tfilter[:width_red] = 1
sim_time = len(v)*blocktime + 2*buffer # total simulation time (in ms)
t = np.arange(0,sim_time,sim_res) # duration of simulation
current_input = np.ones((metadata['n_n'],t.shape[0]-int(2*buffer/sim_res)))
for i in range(len(v)):
current_input[:,i*width:(i+1)*width]=0.0735*current_input[:,i*width:(i+1)*width]*tfilter_base
current_input[:,i*width:(i+1)*width]+= 0.5*(current_input[:,i*width:(i+1)*width].T*v[i]).T*tfilter
current_input = np.concatenate([np.zeros((current_input.shape[0],int(buffer/sim_res))),current_input,np.zeros((current_input.shape[0],int(buffer/sim_res)))],axis=1)
current_input += 0.05*current_input*np.random.normal(size=current_input.shape)+ 0.001*np.random.normal(size=current_input.shape)
datasets = []
n_reps = 5
for x in range(n_reps):
state_vector = [-45]* metadata['p_n']+[-45]* metadata['l_n'] + [0.5]* (metadata['n_n'] + 4*metadata['p_n'] + 3*metadata['l_n']) + [2.4*(10**(-4))]*metadata['l_n'] + [0]*(n_syn_ach+n_syn_fgaba+2*n_syn_sgaba) + [-(sim_time+1)]*metadata['n_n']
state_vector = np.array(state_vector)
state_vector = state_vector + 0.005*state_vector*np.random.normal(size=state_vector.shape)
np.save(f'__simcache__/metadata_{graphno}_{pertseed}.npy',metadata,allow_pickle=True)
np.save(f'__simcache__/state_vector_{graphno}_{pertseed}',state_vector)
np.save(f'__simcache__/current_input_{graphno}_{pertseed}',current_input)
np.save(f'__simcache__/time_{graphno}_{pertseed}',np.array_split(t,4*(len(v)+1)))
for i in tqdm(range(4*(len(v)+1))):
call(['python','simple30.py',str(i),str(graphno),str(pertseed)])
dataset = []
files = os.listdir('__simoutput__/')
files.sort(key=lambda var:[int(x) if x.isdigit() else x for x in re.findall(r'[^0-9]|[0-9]+', var)])
for i in files:
dataset.append(np.load(f'__simoutput__/{i}'))
dataset = np.concatenate(dataset)[:,1:31]
datasets.append(dataset)
time.sleep(60)
events = []
for j in range(n_reps):
temp = datasets[j]
fire = np.logical_and(temp[:-1,:]<-20,temp[1:,:]>-20)
event = []
for i in range(fire.shape[1]):
event.append(np.arange(temp.shape[0])[:-1][fire[:,i]])
event = np.array(event,dtype=object)
events.append(event)
events= np.array(events,dtype=object)
np.save(f"../data/30LN/LN30_data_{graphno}_{pertseed}.npy",datasets,allow_pickle=True)
np.save(f"../data/30LN/LN30_current_{graphno}_{pertseed}.npy",current_input[:,::100],allow_pickle=True)
np.save(f"../data/30LN/LN30_events_{graphno}_{pertseed}.npy",events,allow_pickle=True)
files = glob.glob('__simcache__/*')
for f in filter(lambda v: f"{graphno}_{pertseed}" in v,files):
os.remove(f)
files = glob.glob('__simoutput__/*')
for f in filter(lambda v: f"{graphno}_{pertseed}" in v,files):
os.remove(f)
else:
datasets = np.load(f"../data/30LN/LN30_data_{graphno}_{pertseed}.npy",allow_pickle=True)
current_input = np.load(f"../data/30LN/LN30_current_{graphno}_{pertseed}.npy",allow_pickle=True)
events = np.load(f"../data/30LN/LN30_events_{graphno}_{pertseed}.npy",allow_pickle=True)
plt.figure(figsize=(12,8))
plt.eventplot(events.T.flatten(),colors=np.tile(plt.cm.inferno(np.linspace(0.2,0.8,30)),5).reshape(-1,4),linelengths=0.6)
for i in range(1500,6500,1000):
plt.fill_betweenx([0,150],[i,i],[i+100,i+100],color='lightgray')
plt.box(False)
plt.xlim(0,7000)
plt.yticks([])
plt.ylabel('LN Spike Raster')
plt.xlabel('Time (in ms)')
plt.tight_layout()
plt.savefig(f"Figures/LN_only_spiketrains_{graphno}_{pertseed}.svg")
plt.show()
plt.figure(figsize=(3,8))
for i in range(30):
plt.plot(0.14*i+current_input[i,:],color=plt.cm.inferno(0.2+0.6*(i/30)))
plt.box(False)
plt.yticks([])
plt.ylabel('Excitatory Drive (E)')
plt.xlabel('Time (in ms)')
plt.tight_layout()
plt.savefig(f"Figures/LN_only_current_{graphno}_{pertseed}.svg")
| fig2/fig2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Task 1: Connect to the supplied petsDB, and (OPTIONAL) write a function to check whether the connection succeeded
# +
### Write your code below this comment
# -
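# A minimal sketch of one possible approach, assuming the supplied database is an SQLite file (the actual path is not given here, so ':memory:' stands in for it when running the sketch):

```python
import sqlite3

# Open a connection and run a trivial query to confirm the handle works;
# sqlite3.connect alone succeeds even for a brand-new file.
def check_connection(db_path):
    try:
        conn = sqlite3.connect(db_path)
        conn.execute("SELECT 1")
        return conn
    except sqlite3.Error:
        return None

conn = check_connection(":memory:")  # replace ':memory:' with the petsDB path
print(conn is not None)  # True
```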
# ### Task 2: What are the different age groups in the persons database?
# +
### Write your code below this comment
# -
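# A hypothetical sketch: the table and column names ('persons', 'age') are assumptions about the petsDB schema, and the rows inserted below are stand-in data so the query runs on its own:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE persons (id INTEGER, age INTEGER)")  # stand-in schema
conn.executemany("INSERT INTO persons VALUES (?, ?)", [(1, 5), (2, 5), (3, 17)])

# SELECT DISTINCT lists the different age groups present in the table.
ages = [row[0] for row in conn.execute("SELECT DISTINCT age FROM persons ORDER BY age")]
print(ages)  # [5, 17]
```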
# ### Task 3: Which age group has the maximum number of people?
# +
### Write your code below this comment
# -
# ### Task 4: How many people do not have a full name (last name is blank/null)?
# +
### Write your code below this comment
# -
# ### Task 5: How many people have more than one pet? (*)
# +
### Write your code below this comment
# -
# ### Task 6: How many pets have received treatments?
#
# +
### Write your code below this comment
# -
# ### Task 7: How many pets have received treatment that we know the type of?
# +
### Write your code below this comment
# -
# ### Task 8: How many pets are there from the city called "east port"?
# +
### Write your code below this comment
# -
# ### Task 9: How many pets are there from the city called "east port" and who received a treatment?
# +
### Write your code below this comment
| Lesson08/Activity11/Retrieving_data_correctly_from_Database.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Bayesian Linear Regression
# Linear regression is a very well known tool, but its Bayesian formulation also provides uncertainty estimates for the predictive distribution. This notebook is based on Chapter 3 of Bishop's Pattern Recognition and Machine Learning book.
import numpy as np
from scipy.stats import multivariate_normal
import matplotlib.pyplot as plt
# %matplotlib inline
# ### Generate sample dataset
# Generate $N$ pairs $(x_i,y_i)$ with Gaussian noise, with each $x_i$ sampled from a uniform distribution
N = 12
sigma = 0.1
x = np.random.uniform(low=-1, high=1, size=N)
n = np.random.normal(loc=0, scale=sigma, size=N)
y = 0.3*x -0.8 +n
plt.plot(x,y, 'r.');
plt.show()
# ## Point estimate
# We are trying to design a model $\hat{y} = x w_1 + w_0 + \epsilon$ with $\epsilon \sim N(0, \sigma^2)$
# Note that this model and noise assumption result in the following likelihood function: $$p(\hat{y}|x,w) = N(\hat{y}\,|\,xw_1+w_0,\ \sigma^2)$$
# In general we aim for the Least Squares (LS) solution: $$\min_w \sum_i (y_i-\hat{y}_i)^2$$
# Note that the LS solution is equivalent to the Maximum Likelihood estimate. It can be obtained by minimizing the loss with gradient descent; however, for this simple linear model the normal equations give the closed-form minimizer: $$\hat{w} = (X^TX)^{-1}X^Ty$$
X = np.zeros((x.shape[0], 2))
X[:,0] = x
X[:,1] = 1
X
w = np.dot(np.dot(np.linalg.inv(np.dot(X.T,X)), X.T), y)
w
# However, this solution only provides a point estimate and lacks uncertainty information.
# ## Bayesian inference
# In turn, a Bayesian approach treats $w$ as a random variable with a prior. Bayesian inference is then used to obtain the posterior $p(w|X,Y)$ given the observations
# In order to keep the solutions in closed form, we use a conjugate Gaussian prior for the vector $w$: $$w \sim N(w| m_0, S_0)$$
# Which then results in a Gaussian posterior
# $$p(w|X,Y) = \frac{p(Y|X,w)p(w)}{p(Y|X)} = N(w| m_N, S_N)$$ where $m_N = S_N (S_0^{-1}m_0+\frac{1}{\sigma^2}X^Ty)$ and $S_N^{-1} = S_0^{-1}+\frac{1}{\sigma^2}X^TX$ (here $1/\sigma^2$ is the noise precision)
# For simplicity, let's assume $m_0 = 0$ and $S_0 = \alpha^{-1}I$ with $\alpha = 0.2$
#prior parameters
a = 0.2
m0 = np.zeros(2)
def getPosterior(n):
#Get n points from sample dataset
x_ = X[:n]
y_ = y[:n]
    #Covariance matrix: Sn^{-1} = S0^{-1} + (1/sigma^2) X^T X, with 1/sigma^2 the noise precision
    S0I = a*np.identity(2)
    SnI = S0I + 1/sigma**2*np.dot(x_.T,x_)
    Sn = np.linalg.inv(SnI)
    #Mean: mN = Sn (S0^{-1} m0 + (1/sigma^2) X^T y)
    tt = np.dot(S0I, m0) + 1/sigma**2*np.dot(x_.T,y_)
    Mn = np.dot(Sn, tt)
return multivariate_normal(mean=Mn, cov=Sn)
def plot_dist2D(dist):
x, y = np.mgrid[-1:1:.01, -1:1:.01]
pos = np.empty(x.shape + (2,))
pos[:, :, 0] = y; pos[:, :, 1] = x
plt.contourf(x, y, dist.pdf(pos))
plt.title('Posterior Distribution $p(w|X,Y)$')
plt.xlabel('w0')
plt.ylabel('w1')
# #### Posterior distribution plots
# We can plot the posterior after aggregating different numbers of points. Observe how the posterior distribution becomes narrower as more observations are aggregated
plot_dist2D(getPosterior(1))
plot_dist2D(getPosterior(4))
plot_dist2D(getPosterior(6))
plot_dist2D(getPosterior(10))
# The full posterior (when all points are incorporated) peaks at its mean, $w_{MAP} = m_N$, since it is Gaussian. In the case where the prior $p(w)$ is infinitely spread ($\alpha \to 0$), $w_{MAP} = m_N = w_{ML} = (X^TX)^{-1}X^Ty$
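# This limit can be checked numerically: shrinking the prior precision $\alpha$ drives the posterior mean toward the least-squares solution (the data below are synthetic and independent of the variables defined earlier):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(-1, 1, 20), np.ones(20)])
y = X @ np.array([0.3, -0.8]) + 0.1 * rng.normal(size=20)
sigma2 = 0.1 ** 2

# Maximum-likelihood (least-squares) solution via the normal equations.
w_ml = np.linalg.solve(X.T @ X, X.T @ y)

# Posterior mean m_N for decreasing prior precision alpha.
errs = []
for alpha in [1.0, 1e-3, 1e-9]:
    Sn = np.linalg.inv(alpha * np.eye(2) + X.T @ X / sigma2)
    m_n = Sn @ (X.T @ y / sigma2)
    errs.append(np.linalg.norm(m_n - w_ml))
print(errs)  # shrinks toward 0 as alpha -> 0
```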
# #### The predictive distribution
# Although we have estimated the posterior of the parameters $w$, we are primarily interested in predicting the value $\hat{y}$ for a new sample $x$: $$p(\hat{y}| x, X,Y) = \int p(\hat{y}|x,w)\,p(w|X,Y)\, dw$$
# Since the likelihood and posterior are both Gaussian, this predictive distribution is also Gaussian: $$p(\hat{y}| x, X,Y) = N(\hat{y}| m_N^Tx, \sigma_N^2(x))$$ where $ \sigma_N^2(x) = \sigma^2 + x^TS_Nx $
# Note that the variance of the predictive distribution depends both on the assumed noise model ($\sigma^2$) and on the uncertainty in the posterior over $w$
def predictive(x, nTrainingSamples):
xp = np.zeros((2,1))
xp[0,0] = x
xp[1,0] = 1
xp = np.matrix(xp)
#Get posterior given nTrainingSamples
posterior = getPosterior(nTrainingSamples)
Mn = np.matrix(posterior.mean)
Sn = np.matrix(posterior.cov)
#Predictive mean
m = np.matmul(Mn,xp)
#Predictive cov
s = sigma**2 + np.dot(xp.T, np.dot(Sn,xp))
return multivariate_normal(mean=m, cov=s)
def plot_dist1D(dist):
x = np.linspace(-4,4, 100)
y = dist.pdf(x)
plt.plot(y,x)
    plt.title(r'Predictive Distribution $p(\hat{y}|x, X,Y)$')
    plt.xlabel('pdf')
    plt.ylabel(r'$\hat{y}$')
# #### We now observe how the predictive distributions become more certain as more training data is obtained
#New values of x where we want to predict y
x = 1.2
plot_dist1D(predictive(x, 2))
plot_dist1D(predictive(x, 6))
plot_dist1D(predictive(x, 12))
# #### We can also observe how the uncertainty changes with the value of x
plot_dist1D(predictive(1.2, 12))
plot_dist1D(predictive(2, 12))
plot_dist1D(predictive(3, 12))
plot_dist1D(predictive(6, 12))
# The predictive distribution's variance grows as $x$ increases, as expected from $\sigma_N^2(x)$
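# A quick numerical check with an illustrative positive-definite $S_N$: because the quadratic term $x^T S_N x$ dominates for large $|x|$, the predictive variance $\sigma_N^2(x) = \sigma^2 + x^T S_N x$ grows across the $x$ values used above:

```python
import numpy as np

sigma2 = 0.1 ** 2
Sn = np.array([[0.05, 0.01], [0.01, 0.02]])  # illustrative posterior covariance

def pred_var(x):
    xv = np.array([x, 1.0])          # augmented input [x, 1]
    return sigma2 + xv @ Sn @ xv     # sigma_N^2(x)

variances = [pred_var(x) for x in (1.2, 2.0, 3.0, 6.0)]
print(variances)  # strictly increasing
```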
| static/notebooks/bayesianlinearregression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/jnrtnan/Linear-Algebra-58020/blob/main/Applications_of_Linear_System.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="gUFwdxXIYDCZ"
# ## Systems of Linear Equations
# + [markdown] id="mnayQ4K0YS_E"
# ### Systems of Linear Equations can be solved with arrays and Numpy
# + colab={"base_uri": "https://localhost:8080/"} id="yZ99rp2Uaisv" outputId="ce42b9f3-98a0-42fe-f213-b29d645fa09e"
import numpy as np
A = np.array([[4,5],[3,-2]])
B = np.array([[7],[11]])
X = np.linalg.inv(A).dot(B)
print(X)
# + colab={"base_uri": "https://localhost:8080/"} id="oGbZsr4obIZ8" outputId="e25b72f8-a9a9-4b7a-aa76-40ba7e089567"
G = np.linalg.solve(A,B)
print(G)
# + colab={"base_uri": "https://localhost:8080/"} id="c51ly02JdAzH" outputId="eac2ec4f-1f7d-4370-ce2f-753f07867a30"
from scipy.linalg import solve
J = solve(A,B)
print(J)
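# The three approaches above should agree; the sketch below cross-checks the two NumPy results and verifies the residual $AX = B$:

```python
import numpy as np

A = np.array([[4., 5.], [3., -2.]])
B = np.array([[7.], [11.]])

X1 = np.linalg.inv(A).dot(B)   # via the explicit inverse
X2 = np.linalg.solve(A, B)     # via a factorization-based solver (preferred)
print(np.allclose(X1, X2), np.allclose(A @ X2, B))  # True True
```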
# + id="X0Xy2IeGeE0I"
| Applications_of_Linear_System.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.6.1
# language: julia
# name: julia-1.6
# ---
using GenericLinearAlgebra
using LinearAlgebra
using SpecialFunctions
using Dates
using PolynomialRoots
using Printf
using FFTW
using PyPlot
using MAT
function GSLW(P, p_P, stop_criteria, R_init, R_max, if_polish = false)
#------------------------------------------------------------------------------------------------------------
# Input:
# P: Chebyshev coefficients of polynomial P, only need to provide non-zero coefficient.
# P should satisfy parity constraint and |P|^2 \le 1
# p_P: parity of P, 0 -- even, 1 -- odd
#         stop_criteria: the algorithm stops when it finds factors that approximate the polynomials with
#                        error less than stop_criteria at the Chebyshev points
# R_init: number of bits used at the beginning
# R_max: number of bits available
#         if_polish: whether to polish the roots; it costs more time but may improve accuracy
#
# Output:
# Phase factor Phi such that real part of (0,0)-component of U_\Phi(x) approximates P(x),
# where U_\Phi(x) = e^{\I \phi_0 \sigma_z} \prod_{j=1}^{\qspdeg} \left[ W(x) e^{\I \phi_j \sigma_z} \right]
#
# Besides, the algorithm will return the L^∞ error of such approximation on Chebyshev points
#
#------------------------------------------------------------------------------------------------------------
#
# Reference:
# <NAME>, <NAME>, <NAME>, and <NAME>.
# Quantum singular value transformation and beyond: exponential improvements for quantum matrix arithmetics.
#
# Author: X.Meng, <NAME>
# Version 1.0 .... 02/2020
#
#------------------------------------------------------------------------------------------------------------
# Step 1: Find all the roots of 1-P^2
R = R_init
while(true)
if(R>=R_max)
return Inf,[]
end
setprecision(BigFloat, R)
P = big.(P)
degree = length(P)
if(p_P==0)
coef_PQ = zeros(BigFloat,degree*2)
for j=1:degree
for k=1:degree
coef_PQ[j+k-1] -= P[j]*P[k]
end
end
coef_PQ[1] += 1
coef_PQ1 = zeros(BigFloat,degree)
for j=1:degree
coef_PQ1[j] -= P[j]
end
coef_PQ1[1] += 1
coef_PQ2 = zeros(BigFloat,degree)
for j=1:degree
coef_PQ2[j] += P[j]
end
coef_PQ2[1] += 1
Proot1 = roots(coef_PQ1, polish = if_polish, epsilon = big.(0.0))
Proot2 = roots(coef_PQ2, polish = if_polish, epsilon = big.(0.0))
Proot = [Proot1;Proot2]
else
coef_PQ = zeros(BigFloat,degree*2)
for j=1:degree
for k=1:degree
coef_PQ[j+k] -= P[j]*P[k]
end
end
coef_PQ[1] += 1
Proot = roots(coef_PQ, polish = if_polish, epsilon = big.(0.0))
end
# Step 2: Find root of 1-P^2, construct full P and Q by FFT
# recover full root list
all_root = zeros(Complex{BigFloat},length(Proot)*2)
for i=1:length(Proot)
tmpnorm = norm(Proot[i])
tmpangle = angle(Proot[i])
all_root[2*i-1] = sqrt(tmpnorm)*exp(1im*tmpangle/2)
all_root[2*i] = -sqrt(tmpnorm)*exp(1im*tmpangle/2)
end
# Construct W such that W(x)W(x)^*=1-P^2(x)
eps = 1e-16
S_0 = 0
S_1 = 0
S_2 = 0
S_3 = 0
S_4 = 0
S1_list = zeros(Complex{BigFloat},length(all_root))
S2_list = zeros(Complex{BigFloat},length(all_root))
S3_list = zeros(Complex{BigFloat},length(all_root))
S4_list = zeros(Complex{BigFloat},length(all_root))
for i=1:length(all_root)
if(abs(all_root[i])<eps)
S_0 += 1
continue
end
if(abs(imag(all_root[i]))<eps&&real(all_root[i])>0)
if(real(all_root[i])<1-eps)
S_1 += 1
S1_list[S_1] = real(all_root[i])
else
S_2 += 1
S2_list[S_2] = findmax([real(all_root[i]),1])[1]
end
continue
end
if(abs(real(all_root[i]))<eps&&imag(all_root[i])>0)
S_3 += 1
S3_list[S_3] = all_root[i]
continue
end
if(imag(all_root[i])>0&&real(all_root[i])>0)
S_4 += 1
S4_list[S_4] = all_root[i]
end
end
K = abs(P[end])
function get_w(x,use_real = true) # W(x)
x = big.(x)
Wx = K*x^(S_0/2)
eps3 = 1e-24
        if(x==1) # if x == \pm 1, slightly move x so that \sqrt{1-x^2} > 0
x -= eps3
elseif(x==-1)
x += eps3
end
for i=1:S_1
Wx *= sqrt(x^2-S1_list[i]^2)
end
for i=1:S_2
Wx *= sqrt(S2_list[i]^2-big.(1))*x+im*S2_list[i]*sqrt(big.(1)-x^2)
end
for i=1:S_3
Wx *= sqrt(abs(S3_list[i])^2+big.(1))*x+im*abs(S3_list[i])*sqrt(big.(1)-x^2)
end
for i=1:S_4
tmpre = real(S4_list[i])
tmpim = imag(S4_list[i])
tmpc = tmpre^2+tmpim^2+sqrt(2*(tmpre^2+1)*tmpim^2+(tmpre^2-1)^2+tmpim^4)
Wx *= tmpc*x^2-(tmpre^2+tmpim^2)+im*sqrt(tmpc^2-1)*x*sqrt(big.(1)-x^2)
end
if(use_real)
return real(Wx)
else
return imag(Wx)/sqrt(big.(1)-x^2)
end
end
function get_p(x) # P(x)
P_t = big.(0)
for j=1:length(P)
if(p_P==1)
P_t += P[j]*x^big.(2*j-1)
else
P_t += P[j]*x^big.(2*j-2)
end
end
return P_t
end
    # Represent full P and Q under Chebyshev basis
get_wr(x) = get_w(x,true)
get_wi(x) = get_w(x,false)
DEG = 2^ceil(Int,log2(degree)+1)
coef_r = ChebyExpand(get_wr, DEG)
coef_i = ChebyExpand(get_wi, DEG)
coef_p = ChebyExpand(get_p, DEG)
if(p_P==0)
P_new = 1im.*coef_r[1:2*degree-1]+coef_p[1:2*degree-1]
Q_new = coef_i[1:2*degree-2].*1im
else
P_new = 1im.*coef_r[1:2*degree]+coef_p[1:2*degree]
Q_new = coef_i[1:2*degree-1].*1im
end
# Step 3: Get phase factors and check convergence
phi = get_factor(P_new,Q_new)
max_err = 0
t = cos.(collect(1:2:(2*degree-1))*big.(pi)/big.(4)/big.(degree))
for i=1:length(t)
targ, ret = QSPGetUnitary(phi, t[i])
P_t = big.(0)
for j=1:degree
if(p_P==1)
P_t += P[j]*t[i]^big.(2*j-1)
else
P_t += P[j]*t[i]^big.(2*j-2)
end
end
t_err = norm(real(ret[1,1])-P_t)
if(t_err>max_err)
max_err = t_err
end
end
@printf("For degree N = %d, precision R = %d, the estimated inf norm of err is %5.4e\n",length(phi)-1,R,max_err)
if(max_err<stop_criteria)
return max_err,phi
else
@printf("Error is too big, increase R.\n")
end
R = R*2
end
end
function get_factor(P,Q)
# From polynomials P, Q generate phase factors phi such that
# U_\Phi(x) = [P & i\sqrt{1-x^2}Q \\ i\sqrt{1-x^2}Q^* & P]
# phase factors are generated via a reduction procedure under Chebyshev basis
phi = zeros(BigFloat,length(P))
lenP = length(P)
for i=1:lenP-1
P, Q, phit = ReducePQ(P, Q)
phi[end+1-i] = real(phit)
end
phi[1] = angle(P[1])
return phi
end
function ReducePQ(P, Q)
# A single reduction step
P = big.(P)
Q = big.(Q)
colP = length(P)
colQ = length(Q)
degQ = colQ-1
tmp1 = zeros(Complex{BigFloat},colP+1)
tmp2 = zeros(Complex{BigFloat},colP+1)
tmp1[2:end] = big.(0.5)*P
tmp2[1:end-2] = big.(0.5)*P[2:end]
Px = tmp1 + tmp2
Px[2] = Px[2] + big.(0.5)*P[1]
if(degQ>0)
tmp1 = zeros(Complex{BigFloat},colQ+2)
tmp2 = zeros(Complex{BigFloat},colQ+2)
tmp3 = zeros(Complex{BigFloat},colQ+2)
tmp1[1:end-2] = big.(0.5)*Q
tmp2[3:end] = -big.(1)/big.(4)*Q
tmp3[1:end-4] = -big.(1)/big.(4)*Q[3:end]
Q1_x2 = tmp1 + tmp2 + tmp3
Q1_x2[2] = Q1_x2[2] - 1/big.(4)*Q[2]
Q1_x2[3] = Q1_x2[3] - 1/big.(4)*Q[1]
else
Q1_x2 = zeros(Complex{BigFloat},3)
Q1_x2[1] = big.(0.5)*Q[1]
Q1_x2[end] = -big.(0.5)*Q[1]
end
tmp1 = zeros(Complex{BigFloat},colQ+1)
tmp2 = zeros(Complex{BigFloat},colQ+1)
tmp1[2:end] = big.(0.5)*Q
tmp2[1:end-2] = big.(0.5)*Q[2:end]
Qx = tmp1 + tmp2
Qx[2] = Qx[2] + big.(0.5)*Q[1];
if(degQ>0)
ratio = P[end]/Q[end]*big.(2)
else
ratio = P[end]/Q[end]
end
phi = big.(0.5)*angle(ratio)
rexp = exp(-1im*phi)
Ptilde = rexp * (Px + ratio*Q1_x2)
Qtilde = rexp * (ratio*Qx - P)
Ptilde = Ptilde[1:degQ+1]
Qtilde = Qtilde[1:degQ]
return Ptilde,Qtilde,phi
end
function QSPGetUnitary(phase, x)
# Given phase factors Phi and x, yield U_\Phi(x)
phase = big.(phase)
Wx = zeros(Complex{BigFloat},2,2)
Wx[1,1] = x
Wx[2,2] = x
Wx[1,2] = sqrt(1-x^2)*1im
Wx[2,1] = sqrt(1-x^2)*1im
expphi = exp.(1im*phase)
ret = zeros(Complex{BigFloat},2,2)
ret[1,1] = expphi[1]
ret[2,2] = conj(expphi[1])
for k = 2:length(expphi)
temp = zeros(Complex{BigFloat},2,2)
temp[1,1] = expphi[k]
temp[2,2] = conj(expphi[k])
ret = ret * Wx * temp
end
targ = real(ret[1,1])
return targ,ret
end
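The product structure of QSPGetUnitary can be checked language-agnostically. With all phase factors set to zero, U_\Phi(x) collapses to W(x)^d, whose top-left entry is the Chebyshev polynomial T_d(x) = cos(d·arccos x). A minimal pure-Python sketch (illustrative names, not part of the source):

```python
import cmath
import math

def mat_mul(a, b):
    # 2x2 complex matrix product
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def qsp_unitary(phases, x):
    # W(x): x on the diagonal, i*sqrt(1-x^2) off-diagonal, as in QSPGetUnitary
    s = math.sqrt(1.0 - x * x)
    wx = [[complex(x), 1j * s], [1j * s, complex(x)]]
    def zrot(phi):
        return [[cmath.exp(1j * phi), 0], [0, cmath.exp(-1j * phi)]]
    u = zrot(phases[0])
    for phi in phases[1:]:
        u = mat_mul(mat_mul(u, wx), zrot(phi))
    return u

# With d+1 zero phases the (1,1) entry is T_d(x) = cos(d*acos(x))
d, x = 5, 0.3
u = qsp_unitary([0.0] * (d + 1), x)
```

The same check with random phases would still leave the matrix unitary, which is a useful invariant when debugging phase-factor code.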
function ChebyExpand(func, maxorder)
# Evaluate Chebyshev coefficients of a polynomial of degree at most maxorder
M = maxorder
theta = zeros(BigFloat,2*M)
for i=1:2*M
theta[i] = (i-1)*big.(pi)/M
end
f = func.(-cos.(theta))
c = real.(BigFloatFFT(f))
c = copy(c[1:M+1])
c[2:end-1] = c[2:end-1]*2
c[2:2:end] = -copy(c[2:2:end])
c = c./(big.(2)*big.(M))
return c
end
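ChebyExpand extracts these coefficients with an FFT; the same transform can be written as a direct cosine sum (O(N²) but dependency-free). A small Python sketch of the idea, using the Chebyshev roots grid rather than the uniform θ grid above (a variant, not the exact grid used in ChebyExpand):

```python
import math

def cheb_coeffs(g, deg):
    # Discrete Chebyshev transform on the roots grid x_j = cos(pi*(j+1/2)/n)
    n = deg + 1
    thetas = [math.pi * (j + 0.5) / n for j in range(n)]
    samples = [g(math.cos(t)) for t in thetas]
    coeffs = []
    for k in range(n):
        s = sum(f * math.cos(k * t) for f, t in zip(samples, thetas))
        coeffs.append(s / n if k == 0 else 2.0 * s / n)
    return coeffs

# x^3 = (3*T_1 + T_3)/4, so the coefficients should be [0, 0.75, 0, 0.25]
c = cheb_coeffs(lambda x: x ** 3, 3)
```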
function Chebytonormal(coef)
#Convert Chebyshev basis to normal basis
coef = big.(coef)
coef2 = zeros(BigFloat,length(coef))
A = zeros(BigFloat,length(coef),length(coef))
b = zeros(BigFloat,length(coef))
t = cos.(collect(1:2:(2*length(coef)-1))*big.(pi)/big.(4)/big.(length(coef)))
t2 = collect(1:2:(2*length(coef)-1))*big.(pi)/big.(4)/big.(length(coef))
for i=1:length(coef)
for j=1:length(coef)
A[i,j] = t[i]^(j-1)
b[i] += coef[j]*cos((j-1)*t2[i])
end
end
coef2 = A\b
#@printf("Error is %5.4e\n",norm(A*coef2-b))
return coef2
end
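Chebytonormal performs the basis change by solving a Vandermonde-type linear system. An alternative sketch (pure Python, hypothetical helper name) accumulates monomial coefficients directly from the recurrence T_{k+1}(x) = 2x·T_k(x) − T_{k−1}(x):

```python
def cheb_to_monomial(coef):
    # Monomial coefficients of sum_k coef[k] * T_k(x), built from the
    # recurrence T_{k+1} = 2x*T_k - T_{k-1}
    n = len(coef)
    t_prev = [1.0] + [0.0] * (n - 1)      # T_0 = 1
    t_cur = [0.0] * n
    if n > 1:
        t_cur[1] = 1.0                    # T_1 = x
    out = [coef[0] * c for c in t_prev]
    if n > 1:
        out = [o + coef[1] * c for o, c in zip(out, t_cur)]
    for k in range(2, n):
        t_next = [0.0] * n
        for i in range(n - 1):
            t_next[i + 1] += 2.0 * t_cur[i]   # multiply T_k by 2x
        t_next = [a - b for a, b in zip(t_next, t_prev)]
        out = [o + coef[k] * c for o, c in zip(out, t_next)]
        t_prev, t_cur = t_cur, t_next
    return out

# 3*T_0 + 2*T_2, with T_2 = 2x^2 - 1, gives 1 + 0x + 4x^2
m = cheb_to_monomial([3.0, 0.0, 2.0])
```

The recurrence route avoids the conditioning issues of the Vandermonde solve, at the cost of not reusing the sampled-node machinery above.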
function BigFloatFFT(x)
# Perform FFT on vector x
# This function only works for length(x) = 2^k
N = length(x);
xp = x[1:2:end];
xpp = x[2:2:end];
if(N>=8)
Xp = BigFloatFFT(xp);
Xpp = BigFloatFFT(xpp);
X = zeros(Complex{BigFloat},N,1);
Wn = exp.(big.(-2)im*big.(pi)*(big.(0:N/2-1))/big.(N));
tmp = Wn .* Xpp;
X = [(Xp + tmp);(Xp -tmp)];
elseif(N==2)
X = big.([1 1;1 -1])*x;
elseif(N==4)
X = big.([1 0 1 0; 0 1 0 -1im; 1 0 -1 0;0 1 0 1im]*[1 0 1 0;1 0 -1 0;0 1 0 1;0 1 0 -1])*x;
end
return X
end
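BigFloatFFT above is a hand-rolled radix-2 Cooley-Tukey transform, needed because standard FFT libraries do not accept BigFloat inputs. The same recursion in plain Python, for reference (power-of-two lengths only, like the original):

```python
import cmath

def fft_radix2(x):
    # Recursive Cooley-Tukey FFT; len(x) must be a power of two
    n = len(x)
    if n == 1:
        return list(x)
    even = fft_radix2(x[0::2])
    odd = fft_radix2(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + tw
        out[k + n // 2] = even[k] - tw
    return out

X = fft_radix2([1, 2, 3, 4])
```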
# +
# Test case 1: Hamiltonian simulation
#
# Here we want to approximate e^{-i\tau x} by the Jacobi-Anger expansion:
#
# e^{-i\tau x} = J_0(\tau)+2\sum_{k even} (-1)^{k/2}J_{k}(\tau)T_k(x)+2i\sum_{k odd} (-1)^{(k-1)/2}J_{k}(\tau) T_k(x)
#
# We truncate the series at N = 1.4\tau+log(10^{14}), which gives a polynomial approximation of e^{-i\tau x} with
# accuracy 10^{-14}. We treat the real and imaginary parts of the truncated series separately and divide each
# by a constant factor of 2 to enhance stability.
#
# parameters
# stop_eps: desired accuracy
# tau: the duration \tau in Hamiltonian simulation
# R_init: number of bits used at the beginning
# R_max: number of bits available
stop_eps = 1e-12
tau = 100
R_init = 1024
R_max = 1025
#------------------------------------------------------------------
phi1 = []
phi2 = []
for p_P=0:1
N = ceil.(Int,tau*1.4+log(1e14))
if(p_P==0)
setprecision(BigFloat,4096)
if(mod(N,2)==1)
N -= 1
end
coef = zeros(BigFloat,N+1)
for i=1:(round(Int,N/2)+1)
coef[2*i-1] = (-1)^(i-1)*besselj(big.(2.0*(i-1)),tau)
end
coef[1] = coef[1]/2
P = Chebytonormal(coef)
P = P[1:2:end]
else
setprecision(BigFloat,4096)
if(mod(N,2)==0)
N += 1
end
coef = zeros(BigFloat,N+1)
for i=1:round(Int,(N+1)/2)
coef[2*i] = (-1)^(i-1)*besselj(big.(2*i-1),tau)
end
P = Chebytonormal(coef)[2:2:end]
end
start_time = time()
err,phi = GSLW(P,p_P,stop_eps,R_init,R_max)
elapsed_time = time()-start_time
@printf("Elapsed time is %4.2e s\n", elapsed_time)
end
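The even (real) half of the truncated Jacobi-Anger series sums to cos(τx), which makes for an easy standalone sanity check of the truncation. The sketch below is a pure-Python illustration (hypothetical helper names; J_n is evaluated from its integral representation rather than a Bessel library):

```python
import math

def bessel_j(n, z, steps=2000):
    # J_n(z) = (1/pi) * integral_0^pi cos(n*t - z*sin(t)) dt, via the trapezoid rule
    h = math.pi / steps
    total = 0.5 * (math.cos(0.0) + math.cos(n * math.pi - z * math.sin(math.pi)))
    for j in range(1, steps):
        t = j * h
        total += math.cos(n * t - z * math.sin(t))
    return total * h / math.pi

def cos_tau_x(tau, x, order):
    # Even part of the Jacobi-Anger expansion, truncated at the given order
    acc = bessel_j(0, tau)
    for k in range(2, order + 1, 2):
        acc += 2.0 * (-1) ** (k // 2) * bessel_j(k, tau) * math.cos(k * math.acos(x))
    return acc

tau, x = 5.0, 0.37
approx = cos_tau_x(tau, x, order=24)
```

Since J_k(τ) decays super-exponentially once k exceeds τ, an order around 1.4τ plus a logarithmic margin already reaches near machine precision, which is the truncation rule used above.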
# +
# Test case 2: Eigenstate filter
#
# Here we want to generate factors for the eigenstate filter function:
#
# f_n(x,\delta)=\frac{T_n(-1+2\frac{x^2-\delta^2}{1-\delta^2})}{T_n(-1+2\frac{-\delta^2}{1-\delta^2})}.
#
# We divide f_n by a constant factor \sqrt{2} to enhance stability.
#
# Reference: <NAME> and <NAME>
# Solving quantum linear system problem with near-optimal complexity
#
# parameters
# stop_eps: desired accuracy
# n, \delta: parameters of f_n
# R_init: number of bits used at the beginning
# R_max: number of bits available
#
stop_eps = 1e-12
n = 100
delta = 0.03
R_init = 1024
R_max = 1025
#------------------------------------------------------------------
function f_n(x,n,delta)
val = copy(x)
delta = big.(delta)
fact = chebyshev(-big.(1)-big.(2)*delta^2/(big.(1)-delta^2),n)
if(length(x)==1)
return chebyshev(-big.(1)+big.(2)*(x^2-delta^2)/(big.(1)-delta^2),n)/fact
else
for i=1:length(x)
val[i] = chebyshev(-1+2*(x[i]^2-delta^2)/(1-delta^2),n)/fact
end
return val
end
end
function chebyshev(x,n) # T_n(x)
if(abs(x)<=1)
return cos(big.(n)*acos(x))
elseif(x>1)
return cosh(big.(n)*acosh(x))
else
return big.((-1)^n)*cosh(big.(n)*acosh(-x))
end
end
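The three-branch evaluation of T_n in the chebyshev helper above translates directly to Python; a quick check against T_3(x) = 4x³ − 3x exercises all three branches:

```python
import math

def chebyshev_t(n, x):
    # T_n via the trig/hyperbolic closed forms: cos on [-1,1], cosh outside
    if abs(x) <= 1:
        return math.cos(n * math.acos(x))
    if x > 1:
        return math.cosh(n * math.acosh(x))
    return (-1) ** n * math.cosh(n * math.acosh(-x))

# T_3(-2) = -26, T_3(0.5) = -1, T_3(1.5) = 9
vals = [chebyshev_t(3, x) for x in (-2.0, 0.5, 1.5)]
```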
# Obtain expansion of f_n under Chebyshev basis via FFT
setprecision(BigFloat,1024)
M = 2*n
theta = range(0, stop=2*pi, length=2*M+1)
theta = theta[1:2*M]
f = f_n(-cos.(theta),n,delta)
c = real(fft(f))
c = c[1:M+1]
c[2:end-1] = c[2:end-1]*2
c[2:2:end] = - c[2:2:end]
c = c / (2*M)
setprecision(BigFloat,4096)
P = Chebytonormal(c)[1:2:end]/sqrt(big.(2.0))
p_P = 0
start_time = time()
err,phi = GSLW(P,p_P,stop_eps,R_init,R_max)
elapsed_time = time()-start_time
@printf("Elapsed time is %4.2e s\n", elapsed_time)
# +
# Test case 3: Matrix inversion
#
# We would like to approximate 1/x over [1/kappa,1] by a polynomial. The polynomial is generated
# by the Remez algorithm, and the approximation error is bounded by 10^{-6}.
#
# parameters
# stop_eps: desired accuracy
# kappa: parameters of polynomial approximation
# R_init: number of bits used at the beginning
# R_max: number of bits available
#
stop_eps = 1e-12
kappa = 20
R_init = 2048
R_max = 2049
#------------------------------------------------------------------
# even approximation
# enter your path here
matpath2 = "Data\\inversex\\"
vars = matread(matpath2 * "coef_xeven_" * string(kappa)*"_6"* ".mat")
coef = vars["coef"]
setprecision(BigFloat,4096)
coef2 = zeros(2*length(coef)-1)
coef2[1:2:end] = coef
P = Chebytonormal(coef2)[1:2:end]
p_P = 0
start_time = time()
err,phi1 = GSLW(P,p_P,stop_eps,R_init,R_max)
elapsed_time = time()-start_time
@printf("Elapsed time is %4.2e s\n", elapsed_time)
# odd approximation
vars = matread(matpath2 * "coef_xodd_" * string(kappa)*"_6"* ".mat")
coef = vars["coef"]
setprecision(BigFloat,4096)
coef2 = zeros(2*length(coef))
coef2[2:2:end] = coef
P = Chebytonormal(coef2)[2:2:end]
start_time = time()
p_P = 1
err,phi = GSLW(P,p_P,stop_eps,R_init,R_max)
elapsed_time = time()-start_time
@printf("Elapsed time is %4.2e s\n", elapsed_time)
# +
## Test case 4: QSVT
stop_eps = 1e-12
tau = 100
R_init = 65536
R_max = 65537
#------------------------------------------------------------------
phi1 = []
phi2 = []
coeff = [0, 1.15635132, 0, 0, 0, 0.225911198, 0, 0, 0, 0.11899033, 0, 0, 0, 0.0760566958, 0, 0, 0, 0.0525742336, 0, 0, 0, 0.0379644735, 0, 0, 0, 0.0283926438, 0, 0, 0, 0.0221076246, 0]
# coeff = [1.15635132, 0.225911198, 0.11899033, 0.0760566958, 0.0525742336, 0.0379644735, 0.0283926438, 0.0221076246]
setprecision(BigFloat,4096)
start_time = time()
p_P = 1
c = Chebytonormal(coeff)[2:2:end]
err, phi = GSLW(c, p_P, stop_eps, R_init, R_max)
elapsed_time = time() - start_time
# for p_P=0:1
# N = ceil.(Int,tau*1.4+log(1e14))
# if(p_P==0)
# setprecision(BigFloat,4096)
# if(mod(N,2)==1)
# N -= 1
# end
# coef = zeros(BigFloat,N+1)
# for i=1:(round(Int,N/2)+1)
# coef[2*i-1] = (-1)^(i-1)*besselj(big.(2.0*(i-1)),tau)
# end
# coef[1] = coef[1]/2
# P = Chebytonormal(coef)
# P = P[1:2:end]
# else
# setprecision(BigFloat,4096)
# if(mod(N,2)==0)
# N += 1
# end
# coef = zeros(BigFloat,N+1)
# for i=1:round(Int,(N+1)/2)
# coef[2*i] = (-1)^(i-1)*besselj(big.(2*i-1),tau)
# end
# P = Chebytonormal(coef)[2:2:end]
# end
# start_time = time()
# err,phi = GSLW(P,p_P,stop_eps,R_init,R_max)
# elpased_time = time()-start_time
# @printf("Elapsed time is %4.2e s\n", elpased_time)
# end
| qsvt_search/src/QSPSolver/Solvers/GSLW_QSP.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Regent
# language: regent
# name: regent
# ---
# # Session 2 Part 1
#
# So far, our circuit simulation has been running sequentially. In this session, we'll work on parallelizing the code. This will require us to do two things. First, we must describe how regions are *partitioned* (decomposed into pieces), so that Regent can determine what portions of the computation may run in parallel. Second, we'll need to update the tasks to use these newly partitioned regions.
#
# Let's start with partitioning.
#
# Partitioning is critical in Regent for two reasons. First, partitioning determines what data parallelism is available in the application (if any). Second, partitioning limits the amount of data movement required to perform a computation. Therefore, it will be important to think carefully about the access patterns (read and write sets) of each task in order to construct a good partitioning.
#
# Generally speaking, we'll start by creating an initial *independent* partition of the data (i.e. a partition which does not depend on other partitions). We can make this partition intelligent by, for example, running METIS and partitioning based on the result. But for this exercise, we'll assume that a simple *equal* partition will suffice. An example of such a partition might look like this:
#
# <table style="border: 0px;">
# <tr style="border: 0px;">
# <td style="border: 0px; padding: 10px;">
# Equal Partition of Nodes
# <img src="//regent-lang.org/images/circuit/partition1_equal.png" width="250">
# </td>
# </tr>
# </table>
#
# Now, if the application required no communication between tasks, this might be the only partition we would need. However, the circuit simulation requires communication: updates to nodes in the graph generally require access to the values stored on adjacent nodes. Conceptually, we could solve this by taking the partition above and bloating it by one node in each direction. But this could be really inefficient, because it would require the runtime to move much more data than actually required for the computation. Critically, the only data that *must* move is data for nodes connected to nodes of a different color.
#
# This requires the use of *dependent* partitions (i.e. partitions computed from other partitions). With the *preimage* operator, we can obtain a partition of edges where each edge has the color of its `in_node` field (shown on the left, below). Then we can use the *image* operator to follow the `out_node` field back to nodes (shown in the center, below). Note in particular that nodes marked with multiple colors are exactly the nodes which will be involved in communication. With a little more work, we can obtain a partition of crossing nodes (shown on the right, below).
#
# <table style="border: 0px;">
# <tr style="border: 0px;">
# <td style="border: 0px; padding: 10px;">
# Preimage (Partition of Wires)
# <img src="//regent-lang.org/images/circuit/partition2_wires.png" width="250">
# </td>
# <td style="border: 0px; padding: 10px;">
# Image (Partition of Nodes)
# <img src="//regent-lang.org/images/circuit/partition3_image.png" width="250">
# </td>
# <td style="border: 0px; padding: 10px;">
# Crossing (Partition of Nodes)
# <img src="//regent-lang.org/images/circuit/partition4_crossing.png" width="250">
# </td>
# </tr>
# </table>
#
# With this in mind, we can now compute three new partitions (which will actually be used directly in the application):
#
# <table style="border: 0px;">
# <tr style="border: 0px;">
# <td style="border: 0px; padding: 10px;">
# Private (Partition of Nodes)
# <img src="//regent-lang.org/images/circuit/partition5_private.png" width="250">
# </td>
# <td style="border: 0px; padding: 10px;">
# Shared (Partition of Nodes)
# <img src="//regent-lang.org/images/circuit/partition6_shared.png" width="250">
# </td>
# <td style="border: 0px; padding: 10px;">
# Ghost (Partition of Nodes)
# <img src="//regent-lang.org/images/circuit/partition7_ghost.png" width="250">
# </td>
# </tr>
# </table>
#
# When you take all nodes of a color (e.g. red) together, you'll see that they correspond to the bloated set of nodes required for that task (as we noted above). However (and this is important for performance) only the shared and ghost partitions must be communicated. The private partition is non-overlapping with the other two, and thus it can safely stay put for the duration of the simulation.
#
# Your goal is to construct the four partitions above (private, shared and ghost nodes, and wires). You may find it helpful to construct several intermediate partitions, such as the image, preimage, and crossing partition above. (Note that there are multiple valid solutions; your intermediate partitions might look different from those above.) We have given you an initial equal partition of the nodes to help you get started.
#
# ## Syntax Guide
#
# partition(equal, R, ispace(int1d, N)) -- Divide R into N roughly even pieces.
# partition(R.F, ispace(int1d, N)) -- Partition R according to the field R.F.
# image(R, P, R2.F) -- Image over P via the field R2.F. Result is a partition of R.
# preimage(R, P2, R2.F) -- Preimage of P2 via the field R2.F. Result is a partition of R.
# P1 & P2 -- Intersection of partitions P1 and P2.
# P1 | P2 -- Union of partitions P1 and P2.
# P1 - P2 -- Difference of partitions P1 and P2.
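The private/shared/ghost decomposition can be prototyped outside Regent with plain sets. The sketch below uses a hypothetical toy graph (not the tutorial's circuit): nodes get an equal coloring, crossing nodes are found from the edge list, and the three partitions are derived per color:

```python
# Toy graph: nodes 0..5 in two colors, edges as (in_node, out_node) pairs
nodes = list(range(6))
color = {n: n // 3 for n in nodes}        # equal partition: {0,1,2} -> 0, {3,4,5} -> 1
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]

# A node is "shared" if some edge connects it to a node of another color
shared = {n for a, b in edges if color[a] != color[b] for n in (a, b)}

private = {c: {n for n in nodes if color[n] == c and n not in shared} for c in (0, 1)}
shared_p = {c: {n for n in shared if color[n] == c} for c in (0, 1)}
# Ghost nodes for color c: shared nodes of *other* colors touched by c's edges
ghost = {c: {n for a, b in edges
             if (color[a] == c or color[b] == c)
             for n in (a, b) if n in shared and color[n] != c}
         for c in (0, 1)}
```

Only `shared_p` and `ghost` overlap between colors, which mirrors why only those partitions need to be communicated in the simulation.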
# ## Exercise
# +
import "regent"
local c = regentlib.c
struct Currents {
_0 : float,
_1 : float,
_2 : float,
}
struct Voltages {
_1 : float,
_2 : float,
}
fspace Node {
capacitance : float,
leakage : float,
charge : float,
voltage : float,
}
fspace Wire(rn : region(Node)) {
in_node : ptr(Node, rn),
out_node : ptr(Node, rn),
inductance : float,
resistance : float,
capacitance : float,
current : Currents,
voltage : Voltages,
}
local CktConfig = require("session1/circuit_config")
local helper = require("session1/circuit_helper")
local validator = require("session2/circuit_partition_validator")
task toplevel()
var conf : CktConfig
conf:initialize_from_command()
conf:show()
var num_circuit_nodes = conf.num_pieces * conf.nodes_per_piece
var num_circuit_wires = conf.num_pieces * conf.wires_per_piece
var rn = region(ispace(ptr, num_circuit_nodes), Node)
var rw = region(ispace(ptr, num_circuit_wires), Wire(rn))
new(ptr(Node, rn), num_circuit_nodes)
new(ptr(Wire(rn), rw), num_circuit_wires)
c.printf("Generating a random circuit...\n")
helper.generate_random_circuit(rn, rw, conf)
-- This initial partition of nodes should be the basis of other partitions.
var colors = ispace(int1d, conf.num_pieces)
var pn_equal = partition(equal, rn, colors)
-- TODO: Compute the following partitions of nodes.
var pn_private
var pn_shared
var pn_ghost
-- TODO: Compute the partition of wires.
var pw
-- Put back this call if you want to print out the graph.
-- helper.dump_graph(conf, rn, rw)
-- Your partitions should pass this validation.
-- For each node and wire, validator checks if it belongs to a right region.
c.printf("Validating your circuit partitions...\n")
validator.validate_partitions(conf, rn, rw,
pn_private, pn_shared, pn_ghost, pw)
end
regentlib.start(toplevel)
# -
# Next up: [update the simulation tasks for the new partitioning structure](Session 2 Part 2.ipynb).
| notebooks/Session 2 Part 1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + active=""
# This notebook is for generating exploratory experiments for each SSP under baseline and mitigation policy scenarios.
# -
import sys
sys.path.append(r'C:\Users\moallemie\EMAworkbench-master')
sys.path.append(r'C:\Users\moallemie\EM_analysis')
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from ema_workbench import load_results, ema_logging
# ## Load the model, uncertainities, outcomes; Run the experiments
# Open the Excel input data from the notebook directory before running the code with multiprocessing.
# This line must be at the beginning for multiprocessing.
if __name__ == '__main__':
ema_logging.log_to_stderr(ema_logging.INFO)
# The model must be imported as a .py file for parallel processing. Make sure the Vensim package file and directory are correct
# in Model_init file in the Notebook directory.
from Model_init import vensimModel
from ema_workbench import (TimeSeriesOutcome,
perform_experiments,
RealParameter,
CategoricalParameter,
Constant,
ema_logging,
save_results,
load_results)
directory = 'C:/Users/moallemie/EM_analysis/Model/Exploratory_analysis/'
# For running experiments for each SSP, make sure you update the SSP name in the sheet_name line below.
df_unc = pd.read_excel(directory+'ScenarioFramework.xlsx', sheet_name='Exploratory_analysis_SSP5')
vensimModel.uncertainties = [RealParameter(row['Uncertainty'], row['Min'], row['Max']) for index, row in df_unc.iterrows()]
df_calout = pd.read_excel(directory+'ScenarioFramework.xlsx', sheet_name='Outcomes')
df_indout = pd.read_excel(directory+'ScenarioFramework.xlsx', sheet_name='SDG_indicators')
vensimModel.outcomes = [TimeSeriesOutcome(out) for out in df_calout['Outcome']] + [TimeSeriesOutcome(out) for out in df_indout['Model output indicator']]
from ema_workbench import MultiprocessingEvaluator
from ema_workbench.em_framework.evaluators import (MC, LHS, FAST, FF, PFF, SOBOL, MORRIS)
import time
start = time.time()
with MultiprocessingEvaluator(vensimModel, n_processes=80) as evaluator:
results = evaluator.perform_experiments(scenarios=1000, uncertainty_sampling=LHS)
end = time.time()
print("took {} seconds".format(end-start))
fn = 'D:/moallemie/EM_analysis/Data/SDG_SSP5_exploratory_sc1000.tar.gz'
save_results(results, fn)
experiments, outcomes = results
outcomes
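The LHS (Latin hypercube) sampling requested above stratifies each uncertain parameter into one interval per sample. A minimal standalone sketch of the idea, independent of the EMA Workbench (illustrative function, not the library's implementation):

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    # One stratified draw per interval per dimension, then shuffle each column
    rng = random.Random(seed)
    cols = []
    for lo, hi in bounds:
        pts = [lo + (hi - lo) * (i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(pts)
        cols.append(pts)
    return [tuple(col[i] for col in cols) for i in range(n_samples)]

samples = latin_hypercube(10, [(0.0, 1.0), (-5.0, 5.0)])
```

Compared with plain Monte Carlo, this guarantees coverage of every stratum of every parameter range, which is why it is the default choice for exploratory runs like the one above.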
| Notebook/.ipynb_checkpoints/Exploratory_SDGs_analysis-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
import os
import joblib
import sklearn
import matplotlib
from matplotlib import pyplot as plt
from sklearn.model_selection import train_test_split
#Regressions:
from sklearn.multioutput import MultiOutputRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import Ridge
from sklearn.linear_model import Lasso
from sklearn.linear_model import ElasticNet
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import RidgeCV
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import AdaBoostRegressor
from sklearn.tree import DecisionTreeRegressor
#Metric
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import r2_score
from pandas import DataFrame
# Show progress bar
from tqdm import tqdm
# -
df = pd.read_csv('flo_dataset_augmented.csv')
df
# +
# Input for ML models
input_col = ['in_amount_mmol', 'p_amount_mmol', 'ligand_amount_mmol',
'first_sol_amount_ml', 'second_sol_amount_ml',
'other_1_amount_mmol', 'other_2_amount_mmol', 'total_volume_ml',
'temp_c', 'time_min', 'x0_chloroindium oxalate', 'x0_indium acetate',
'x0_indium bromide', 'x0_indium chloride', 'x0_indium iodide',
'x0_indium myristate', 'x0_indium oxalate', 'x0_indium palmitate',
'x0_indium trifluoroacetate',
'x0_indium tris(N,N-diisopropylacetamidinato)',
'x1_bis(trimethylsilyl)phosphine', 'x1_phosphine gas',
'x1_phosphorus trichloride', 'x1_sodium phosphide',
'x1_tris(diethylamino)phosphine', 'x1_tris(dimethylamino)phosphine',
'x1_tris(trimethylgermyl)phosphine', 'x1_tris(trimethylsilyl)phosphine',
'x1_white phosphorus', 'x2_None', 'x2_dodecanethiol', 'x2_lauric acid',
'x2_myristic acid', 'x2_oleic acid', 'x2_palmitic acid',
'x2_stearic acid', 'x3_4-ethylpyridine', 'x3_None',
'x3_dimethylformamide', 'x3_dodecylamine', 'x3_mesitylene',
'x3_octadecene', 'x3_oleylamine', 'x3_trioctylamine',
'x3_trioctylphosphine', 'x3_trioctylphosphine oxide', 'x4_None',
'x4_dioctyl ether', 'x4_dioctylamine', 'x4_hexadecylamine',
'x4_octylamine', 'x4_oleylamine', 'x4_toluene', 'x4_trioctylphosphine',
'x4_trioctylphosphine oxide',
'x5_None', 'x5_acetic acid', 'x5_superhydride',
'x5_tetrabutylammonium myristate', 'x5_zinc acetate', 'x5_zinc bromide',
'x5_zinc chloride', 'x5_zinc iodide', 'x5_zinc octanoate',
'x5_zinc oleate', 'x5_zinc stearate', 'x5_zinc undecylenate', 'x6_None',
'x6_copper bromide', 'x6_oleic acid','x6_trioctylphosphine', 'x6_water', 'x6_zinc iodide']
output_col = ['diameter_nm', 'abs_nm', 'emission_nm']
X = df[input_col]
Y = df[output_col]
# +
# Splitting dataset for training
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.15, random_state=45, shuffle=True)
# -
Y.shape
X.shape
# +
#This is used to roughly show which regression performs better.
# Testing Regressions:
REGRESSIONS = {
"Extra trees": ExtraTreesRegressor(n_estimators=10,
max_features=32,
random_state=44),
"K-nn": KNeighborsRegressor(),
"Linear regression": LinearRegression(),
"Ridge": RidgeCV(),
"Lasso": Lasso(),
"ElasticNet": ElasticNet(random_state=0),
"RandomForestRegressor": RandomForestRegressor(max_depth=4, random_state=2),
"Decision Tree Regressor":DecisionTreeRegressor(max_depth=5),
"MultiO/P GBR" :MultiOutputRegressor(GradientBoostingRegressor(n_estimators=5)),
"MultiO/P AdaB" :MultiOutputRegressor(AdaBoostRegressor(n_estimators=5))
}
# r2 is used to evaluate the performance of all regressions.
r2_list = list()
for name, reg in REGRESSIONS.items():
reg.fit(X_train, Y_train)
Y_pred = pd.DataFrame(reg.predict(X_test))
print(name, '\n')
# This loop will show r2 for each outcome
for column in range(0, 3):
r2 = r2_score(Y_test.iloc[:, column], Y_pred.iloc[:, column])
r2_list.append(r2)
print(' R^2 for diameter is ', r2_list[0], '\n',
'R^2 for Absorbance is ', r2_list[1], '\n',
'R^2 for PL is ', r2_list[2], '\n', '\n',
)
del r2_list[:] #reset the list for the next regression
# -
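Since R² drives the model comparison above, it is worth recalling what it computes: 1 − SS_res/SS_tot. A hand-rolled single-output version (equivalent to sklearn's r2_score for one column):

```python
def r2_by_hand(y_true, y_pred):
    # R^2 = 1 - (residual sum of squares) / (total sum of squares)
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

score = r2_by_hand([3.0, -0.5, 2.0, 7.0], [2.5, 0.0, 2.0, 8.0])
```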
# ## Optimizing
#
# ### 1. Extra Trees
# +
# This is a grid search for three parameters in the Extra Trees algorithm.
# Parameters are: random_state, n_estimators, max_features.
# This gives the best combination of the three parameters for the smallest mean squared error.
min_mae = 99999
min_i, min_j, min_k = 0, 0, 0
for i in tqdm(range(1, 25)):
for j in range(1, 25):
for k in range(2, 40, 1):
ET_regr = ExtraTreesRegressor(n_estimators=i,
max_features=j,
random_state=k)
ET_regr.fit(X_train, Y_train)
ET_Y_pred = pd.DataFrame(ET_regr.predict(X_test))
mae = mean_absolute_error(Y_test, ET_Y_pred)
if (min_mae > mae):
min_mae = mae
min_i = i
min_j = j
min_k = k
print(min_mae, min_i, min_j, min_k)
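The triple-nested grid search above can be flattened with itertools.product, which keeps the best-so-far bookkeeping to a single `min` call. A sketch with a stand-in scoring function (the real one would fit the regressor and return its MAE; `fake_mae` here is purely illustrative):

```python
import itertools

def fake_mae(n_estimators, max_features, random_state):
    # Stand-in for fit/predict/mean_absolute_error; any scoring callable works
    return abs(n_estimators - 12) + abs(max_features - 20) + abs(random_state - 2)

grid = itertools.product(range(1, 25), range(1, 25), range(2, 40))
best_params = min(grid, key=lambda p: fake_mae(*p))
```

The same flattening applies to the Decision Tree and Random Forest searches below; sklearn's GridSearchCV is another option when cross-validated scoring is wanted.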
# +
ET_regr = ExtraTreesRegressor(n_estimators=12,
max_features=20,
random_state=2).fit(X_train, Y_train)
ET_regr.fit(X_train, Y_train)
ET_Y_pred = ET_regr.predict(X_test)
outputs = ('diameter: ', 'absorbance: ', 'emission: ')
for i in range(0, 3):
ET_r2 = r2_score(Y_test.iloc[:, i], pd.DataFrame(ET_Y_pred).loc[:, i])
ET_MSE = mean_squared_error(Y_test.iloc[:, i], pd.DataFrame(ET_Y_pred).loc[:, i])
ET_RMSE = mean_squared_error(Y_test.iloc[:, i], pd.DataFrame(ET_Y_pred).loc[:, i], squared=False)
ET_MAE = mean_absolute_error(Y_test.iloc[:, i], pd.DataFrame(ET_Y_pred).loc[:, i])
print(outputs[i], 'r2:', ET_r2, '; MSE:', ET_MSE, '; RMSE:', ET_RMSE, '; MAE:', ET_MAE )
# -
# ### 2. Decision Tree
# +
# This is a grid search for three parameters in the Decision Trees algorithm.
# Parameters are: max_depth, max_features, random_state.
# This gives the best combination of the three parameters for the smallest mean absolute error.
min_mae = 99999
min_i, min_j, min_k = 0, 0, 0
for i in tqdm(range(1, 21)):
for j in range(1, 21):
for k in range(5, 40, 5):
DT_regr = DecisionTreeRegressor(max_depth=i,
max_features=j,
random_state=k)
DT_regr.fit(X_train, Y_train)
DT_Y_pred = pd.DataFrame(DT_regr.predict(X_test))
mae = mean_absolute_error(Y_test, DT_Y_pred)
if (min_mae > mae):
min_mae = mae
min_i = i
min_j = j
min_k = k
print(min_mae, min_i, min_j, min_k)
# +
DT_regr = DecisionTreeRegressor(max_depth=18,
max_features=1,
random_state=15)
DT_regr.fit(X_train, Y_train)
DT_Y_pred = DT_regr.predict(X_test)
outputs = ('diameter: ', 'absorbance: ', 'emission: ')
for i in range(0, 3):
DT_r2 = r2_score(Y_test.iloc[:, i], pd.DataFrame(DT_Y_pred).loc[:, i])
DT_MSE = mean_squared_error(Y_test.iloc[:, i], pd.DataFrame(DT_Y_pred).loc[:, i])
DT_RMSE = mean_squared_error(Y_test.iloc[:, i], pd.DataFrame(DT_Y_pred).loc[:, i], squared=False)
DT_MAE = mean_absolute_error(Y_test.iloc[:, i], pd.DataFrame(DT_Y_pred).loc[:, i])
print(outputs[i], 'r2:', DT_r2, '; MSE:', DT_MSE, '; RMSE:', DT_RMSE, '; MAE:', DT_MAE )
# -
# ### 3. Random Forest
# +
# This is a grid search for three parameters in the Random Forest algorithm.
# Parameters are: max_depth, n_estimators, max_features.
# Random_state is set to 45.
# This gives the best combination of the three parameters for the smallest mean absolute error.
min_mae = 99999
min_i, min_j, min_k = 0, 0, 0
for i in tqdm(range(1, 27)):
for j in range(1, 27):
for k in range(2, 48, 2):
RF_regr = RandomForestRegressor(max_depth=i,
n_estimators=j,
max_features=k,
random_state=45)
RF_regr.fit(X_train, Y_train)
RF_Y_pred = pd.DataFrame(RF_regr.predict(X_test))
mae = mean_absolute_error(Y_test, RF_Y_pred)
if (min_mae > mae):
min_mae = mae
min_i = i
min_j = j
min_k = k
print(min_mae, min_i, min_j, min_k)
# +
RF_regr = RandomForestRegressor(max_depth=14,
n_estimators=8,
max_features=20,
random_state=45)
RF_regr.fit(X_train, Y_train)
RF_Y_pred = RF_regr.predict(X_test)
outputs = ('diameter: ', 'absorbance: ', 'emission: ')
for i in range(0, 3):
RF_r2 = r2_score(Y_test.iloc[:, i], pd.DataFrame(RF_Y_pred).loc[:, i])
RF_MSE = mean_squared_error(Y_test.iloc[:, i], pd.DataFrame(RF_Y_pred).loc[:, i])
RF_RMSE = mean_squared_error(Y_test.iloc[:, i], pd.DataFrame(RF_Y_pred).loc[:, i], squared=False)
RF_MAE = mean_absolute_error(Y_test.iloc[:, i], pd.DataFrame(RF_Y_pred).loc[:, i])
print(outputs[i], 'r2:', RF_r2, '; MSE:', RF_MSE, '; RMSE:', RF_RMSE, '; MAE:', RF_MAE )
# -
# ### 4. K Neighbors
# +
min_mae = 99999
min_i, min_j = 0, 0
for i in tqdm(range(1, 40)):
for j in range(1, 40):
KNN_reg = KNeighborsRegressor(n_neighbors=i,
p=j).fit(X_train, Y_train)
KNN_Y_pred = KNN_reg.predict(X_test)
mae = mean_absolute_error(Y_test, KNN_Y_pred)
if (min_mae > mae):
min_mae = mae
min_i = i
min_j = j
print(min_mae, min_i, min_j)
# +
KNN_reg = KNeighborsRegressor(n_neighbors=2,
p=5).fit(X_train, Y_train)
KNN_Y_pred = KNN_reg.predict(X_test)
outputs = ('diameter:', 'Abs:', 'PL:')
for i in range(0, 3):
KNN_r2 = r2_score(Y_test.iloc[:, i], pd.DataFrame(KNN_Y_pred).loc[:, i])
KNN_MSE = mean_squared_error(Y_test.iloc[:, i], pd.DataFrame(KNN_Y_pred).loc[:, i])
KNN_RMSE = mean_squared_error(Y_test.iloc[:, i], pd.DataFrame(KNN_Y_pred).loc[:, i], squared=False)
KNN_MAE = mean_absolute_error(Y_test.iloc[:, i], pd.DataFrame(KNN_Y_pred).loc[:, i])
print(outputs[i], 'r2:', KNN_r2, '; MSE:', KNN_MSE, '; RMSE:', KNN_RMSE, '; MAE:', KNN_MAE)
# -
# ### 5. Lasso
# +
min_mae = 9999
min_i, min_j = 0, 0
for i in tqdm(np.arange(0.1, 2.0, 0.02)):
for j in range(1, 100):
L_reg = Lasso(alpha=i, random_state=j).fit(X_train, Y_train)
L_Y_pred = L_reg.predict(X_test)
mae = mean_absolute_error(Y_test, L_Y_pred)
if (min_mae > mae):
min_mae = mae
min_i = i
min_j = j
print(min_mae, min_i, min_j)
# -
# ### Saving Decision Tree model
# +
DT_regr = DecisionTreeRegressor(max_depth=19,
max_features=18,
random_state=20).fit(X_train, Y_train)
DT_Y_pred = DT_regr.predict(X_test)
joblib.dump(DT_regr, "./model_MO_DecisionTree.joblib")
# -
# ## Analyzing
# +
DT_regr = DecisionTreeRegressor(max_depth=19,
max_features=18,
random_state=20).fit(X_train, Y_train)
DT_regr.fit(X_train, Y_train)
DT_Y_pred = DT_regr.predict(X_test)
outputs = ('diameter: ', 'absorbance: ', 'emission: ')
for i in range(0, 3):
DT_r2 = r2_score(Y_test.iloc[:, i], pd.DataFrame(DT_Y_pred).loc[:, i])
DT_MSE = mean_squared_error(Y_test.iloc[:, i], pd.DataFrame(DT_Y_pred).loc[:, i])
DT_RMSE = mean_squared_error(Y_test.iloc[:, i], pd.DataFrame(DT_Y_pred).loc[:, i], squared=False)
DT_MAE = mean_absolute_error(Y_test.iloc[:, i], pd.DataFrame(DT_Y_pred).loc[:, i])
print(outputs[i], 'r2:', DT_r2, '; MAE:', DT_MAE, '; MSE:', DT_MSE, '; RMSE:', DT_RMSE )
# +
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(20,5))
fig.suptitle('Extra Trees', fontsize=20)
ax1.plot(Y_test.iloc[:, 0], pd.DataFrame(ET_Y_pred).loc[:, 0],'o')
ax1.plot([1.5,4.5],[1.5,4.5], color = 'r')
ax1.set_title('Diameter')
ax1.set(xlabel='Observed Values (nm)', ylabel='Predicted Values (nm)')
ax2.plot(Y_test.iloc[:, 1], pd.DataFrame(ET_Y_pred).loc[:, 1],'o')
ax2.plot([400,650],[400,650], color = 'r')
ax2.set_title('Absorption')
ax2.set(xlabel='Observed Values (nm)', ylabel='Predicted Values (nm)')
ax3.plot(Y_test.iloc[:, 2], pd.DataFrame(ET_Y_pred).loc[:, 2],'o')
ax3.plot([400,750],[400,750], color = 'r')
ax3.set_title('Emission')
ax3.set(xlabel='Observed Values (nm)', ylabel='Predicted Values (nm)')
fig.tight_layout()
# +
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(20,5))
fig.suptitle('Extra Trees', fontsize=20)
ax1.plot(pd.DataFrame(ET_Y_pred).loc[:, 0], Y_test.iloc[:, 0], 'o')
ax1.plot([1.5,4.5],[1.5,4.5], color = 'r')
ax1.set_title('Diameter')
ax1.set(xlabel='Predicted Values (nm)', ylabel='Observed Values (nm)')
ax2.plot(pd.DataFrame(ET_Y_pred).loc[:, 1],Y_test.iloc[:, 1], 'o')
ax2.plot([400,650],[400,650], color = 'r')
ax2.set_title('Absorption')
ax2.set(xlabel='Predicted Values (nm)', ylabel='Observed Values (nm)')
ax3.plot(pd.DataFrame(ET_Y_pred).loc[:, 2], Y_test.iloc[:, 2], 'o')
ax3.plot([400,750],[400,750], color = 'r')
ax3.set_title('Emission')
ax3.set(xlabel='Predicted Values (nm)', ylabel='Observed Values (nm)')
fig.tight_layout()
# -
# +
importance_dict = dict()
for i in range(0,71):
importance_dict[input_col[i]] = ET_regr.feature_importances_[i]
sorted_importance = sorted(importance_dict.items(), key=lambda x: x[1], reverse=True)
top5 = DataFrame(sorted_importance[0:5], columns=['features', 'importance score'])
others = DataFrame(sorted_importance[5:], columns=['features', 'importance score'])
combined_others = pd.DataFrame(data = {
'features' : ['others'],
'importance score' : [others['importance score'].sum()]
})
#combining top 5 with others
imp_score = pd.concat([top5, combined_others])
sorted_importance
# +
top7 = DataFrame(sorted_importance[0:8], columns=['features', 'importance score'])
others2 = DataFrame(sorted_importance[8:], columns=['features', 'importance score'])
# combined_others2 = pd.DataFrame(data = {
# 'features' : ['others'],
# 'importance score' : [others2['importance score'].sum()]
# })
# #combining top 10 with others
# imp_score2 = pd.concat([top7, combined_others2])
import seaborn as sns
a4_dims = (20.7, 8.27)
fig, ax = plt.subplots(figsize=a4_dims)
sns.set_theme(style="whitegrid")
ax = sns.barplot(x="features", y="importance score", data=top7)
# -
from sklearn.datasets import load_boston
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
import statsmodels.api as sm
# %matplotlib inline
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import RFE
from sklearn.linear_model import RidgeCV, LassoCV, Ridge, Lasso
| More condensed dataset/notebook2/flo_test/4. Multi-output.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#hide
from timeapp.core import *
# %load_ext autoreload
# %autoreload 2
# -
# # Project name here
#
# > Summary description here.
# This file will become your README and also the index of your documentation.
# ## Install
# `pip install your_project_name`
# ## How to use
# Create a list of tasks.
#
# Hint: you can have several levels using an arbitrary delimiter. Once you export to pandas you can split using the delimiter you used in task definition.
tasks = [
'Project 1: Task1',
'Project 1: Task2',
'Project 2: Task1',
]
t = Timeapp(tasks)
t.app
# Once you have added all the time you spent on tasks you can export to pandas DataFrame and combine it with previous days for analysis.
df = t.to_df()
df
| index.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
# # Keras Tutorial
#
# <!-- include motivation -->
#
# <!-- include introduction to what neural networks are in general -->
# ## Introduction
# This tutorial will introduce you to some basic techniques and ideas you will need to start building neural networks in Keras. A neural network is simply a large collection of "neural units" that are connected to each other, loosely modeling the way the human brain solves problems. They are modeled after clusters of neurons connected by axons. Neural networks typically consist of several layers, each layer having several units that generally take inputs from the previous layer, perform some function, then give outputs to the next layer.
#
# Neural networks can be utilized to solve a variety of different tasks. They have been used in the past for things like classifying data and making predictions. More recently we have heard about neural networks beating the top players in Go with AlphaGo, as well as research projects out of Google where neural networks were able to "learn" their own primitive encryption scheme.
#
# Throughout this tutorial we will be exploring how to create these very powerful networks with the tool "Keras" which makes all these advanced ideas simple to implement. Hope you enjoy this tutorial!
# ## Installing the Libraries
# Before getting started, you'll need to install and import the libraries we will use throughout this tutorial. You will need to install both Theano and Keras using `pip`:
#
# > pip install theano
# > pip install keras
#
# Keras defaults to using TensorFlow as a back-end for its computation, but I will be using Theano for this tutorial because it is compatible with Windows.
#
# After installing these libraries, you will need to change some configuration settings in the JSON file `C:\Users\$USER\.keras\keras.json`. You should be able to just copy-paste the following:
#
# `{
# "image_dim_ordering": "tf",
# "epsilon": 1e-07,
# "floatx": "float32",
# "backend": "theano"
# }`
#
# You may also want to install GCC speed optimization with Theano. On Windows you can install "TDM GCC," making sure to enable OpenMP support during the installation (http://tdm-gcc.tdragon.net/).
#
# We will be importing Keras modules as needed throughout the tutorial.
import numpy as np
# ## Loading example data
# In order to go over the basics of Keras we will have to start by loading a dataset. Keras uses numpy arrays throughout its implementation as inputs and outputs. We will start by loading an exceedingly simple dataset described as "perhaps the best known database to be found in pattern recognition literature." The dataset can be found at `https://archive.ics.uci.edu/ml/datasets/Iris`. The following loads the dataset in the necessary format:
# +
def classConverter(o):
if o == "Iris-versicolor":
return 0
elif o == "Iris-virginica":
return 1
else:
return 2
np.random.seed(5)
totData = np.loadtxt("iris.data", delimiter=",", converters= {4: classConverter})
#shuffle data so we get good distribution in train/test
np.random.shuffle(totData)
#take last 30 (20 %) as test
trainData = totData[:120,:]
testData = totData[120:,:]
#then have to split attributes and labels
trainX = trainData[:,:4]
trainY = np.array([[0.0 if x != cat else 1.0 for x in range(3)]
                   for cat in trainData[:,4]])
testX = testData[:,:4]
testY = np.array([[0.0 if x != cat else 1.0 for x in range(3)]
                  for cat in testData[:,4]])
print(len(trainX), 'train sequences')
print(len(testX), 'test sequences')
# -
# Since the data and the labels are stored together in the data file, we had to split them up manually. Additionally, we had to shuffle the data so the classes weren't clustered together.
#
# Another issue with the data is that the labels were categorical strings; neural networks (and Keras) are unable to handle strings as labels, so we had to convert the labels into arrays of indicator variables.
#
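# The indicator arrays described above are a one-hot encoding; a minimal NumPy sketch, independent of the Iris data:

```python
import numpy as np

# One-hot encoding: integer class label k becomes a length-3 vector with a
# 1.0 at index k, mirroring the indicator arrays built above.
labels = np.array([0, 2, 1, 0])
one_hot = np.eye(3)[labels]
print(one_hot)
```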
# We can get a sense of what our data looks like here, with the left side being the attributes of the data and the right side being the categorical indicator:
for i in range(len(testX)):
    print(testX[i], testY[i])
# ## Defining a model
# The core data structure of Keras is a model, and the main type of model is a "Sequential" model. This just means that the layers of our neural network are going to be laid out in a linear stack format (other options may involve multiple inputs at different layers or layers that are shared). Using different kinds of layers and different parameters, we are able to build out a neural network like we described in the introduction. We will start by creating a simple sequential model:
# +
from keras.models import Sequential
model = Sequential()
# -
# Now that we have a base model defined we can begin adding layers to it. Layers in Keras are just a representation of the layers in a neural network. As you may have guessed, there are many different types of layers to choose from. These include the groupings Keras provides, such as core layers (dense, activation, flatten, masking, ...), convolutional layers (1D convolutions, cropping, upsampling, ...), normalization layers, etc. Stacking different types of layers onto our model is incredibly easy: you can just use the method `.add()` on your model.
#
# In our Iris example we will just be using the simplest and most classic kind of layer, the `Dense` layer. This is just a fully connected layer, meaning that each node is connected to every single node in the next layer. We also will be giving our layers additional attributes: input_dim (for the first layer), and activation type. The activation type is just the function each node in the layer will use to give an output based on its inputs.
#
# Now we add our layers:
# +
from keras.layers import Dense
model.add(Dense(4, input_dim=4, init="normal", activation='relu'))
model.add(Dense(3, init="normal", activation='sigmoid'))
# -
# We choose to use a sigmoid activation function for the final layer so that our output values will be between 0 and 1. We need this so that we can interpret them as probabilities and pick the largest one as our predicted category. We can now configure its learning process using the method `.compile()`:
model.compile(loss="categorical_crossentropy", optimizer="sgd", metrics=["accuracy"])
# We choose a loss function of `categorical_crossentropy`, a.k.a. multiclass log loss, which is a good loss function to use for our indicator label arrays. There are of course a plethora of other loss functions to choose from depending on your needs and fancy. We also just stick to a very standard stochastic gradient descent optimizer; there is likewise an overabundance of optimizers to pick from.
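# As a rough illustration of what `categorical_crossentropy` computes, here is a NumPy sketch (not Keras's exact implementation, which handles clipping and batching internally):

```python
import numpy as np

# Mean over samples of -sum(y_true * log(y_pred)): lower when the
# predicted probability mass sits on the true class. A small epsilon
# clip guards against log(0), similar in spirit to what Keras does.
def categorical_crossentropy(y_true, y_pred):
    y_pred = np.clip(y_pred, 1e-7, 1.0)
    return float(np.mean(-np.sum(y_true * np.log(y_pred), axis=1)))

y_true = np.array([[1, 0, 0], [0, 1, 0]])
y_good = np.array([[0.9, 0.05, 0.05], [0.1, 0.8, 0.1]])
y_bad = np.array([[0.3, 0.4, 0.3], [0.4, 0.3, 0.3]])
print(categorical_crossentropy(y_true, y_good))  # small loss
print(categorical_crossentropy(y_true, y_bad))   # larger loss
```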
# ## Using our model
# Now that we have defined our model we can use it. First we want to train our model on our training data:
model.fit(trainX, trainY, nb_epoch=100, batch_size=5)
# As we can see, Keras gives us progress bars and accuracy/loss values after each epoch. This is useful to see how the training is going.
#
# Now that we have finished training, we should try evaluating how this model does on the 30 examples we withheld from the model to test our accuracy:
perf = model.evaluate(testX, testY)
print("Test loss: ", perf[0])
print("Test accuracy %: ", perf[1])
# As we can see here, the output of `.evaluate()` has two parts: the loss on the test examples, and the `score`, which is the percent it got right. In this case that's 100%! Woohoo! If you're not convinced that we've predicted correctly:
idxToClass = {0: "versicolor", 1: "virginica", 2: "setosa"}
pClasses = model.predict_classes(testX, batch_size=1)
origClasses = [list(x).index(1) for x in testY]
print()
for i in range(len(pClasses)):
    print("{} predicted: {:<10} actual: {:<10}".format("T" if pClasses[i] == origClasses[i] else "F",
                                                       idxToClass[pClasses[i]],
                                                       idxToClass[origClasses[i]]))
# ## Example application: IMDB Movie reviews sentiment classification
#
# <!-- Small image classification https://keras.io/datasets/ -->
# As an example to delve more into different types of Keras layers and settings, we are going to go over and use the IMDB movie reviews sentiment classification example dataset included with Keras. The original network we're adapting from can be found in the references below.
#
# Before we begin, we will import all of the necessary utilities we will need:
from keras.preprocessing import sequence
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Embedding
from keras.layers import LSTM, SimpleRNN, GRU
from keras.datasets import imdb
# Now we define some constants we will be using later and load our data into variables. We set the nb_words flag on our IMDB dataset to specify that we only want to consider the `max_features` most frequent words. We also set a seed so that we can get the same data every time.
max_features = 20000
maxlen = 80 # cut texts after this number of words (among top max_features most common words)
batch_size = 32
(X_train, y_train), (X_test, y_test) = imdb.load_data(path="imdb_full.pkl",
nb_words=max_features,
seed=388)
print(len(X_train), 'train sequences')
print(len(X_test), 'test sequences')
# We then preprocess our data using the Keras sequence preprocessing library to cut each example's text down to `maxlen` words (out of the most frequent words). This shortens the data and will allow us to train faster on the most "relevant" (frequent) data.
print('Pad sequences (samples x time)')
X_train = sequence.pad_sequences(X_train, maxlen=maxlen)
X_test = sequence.pad_sequences(X_test, maxlen=maxlen)
print('X_train shape:', X_train.shape)
print('X_test shape:', X_test.shape)
# Now we've processed our data enough we can begin building our model:
model = Sequential()
model.add(Embedding(max_features, 128, dropout=0.2))
model.add(LSTM(128, dropout_W=0.2, dropout_U=0.2)) # try using a GRU instead, for fun
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
# This model uses many new layer types we have not been exposed to before - more detailed information can be found in the Keras documentation pages. The Embedding layer creates a sort of map from the words in the dataset to some continuous vector space. This is a natural language processing trick that is meant to help with text learning. We then use an LSTM layer, which stands for Long Short-Term Memory unit, a layer type proposed by <NAME> in 1997 - it is considered well-suited to classification tasks where there are very long time lags of unknown size between important events. How this helps learning on this dataset is left as an exercise to the reader. As you can see, there are very complicated layers built upon amazing research that you can utilize by writing one simple line in Keras.
#
# Next we have a dense layer like we've seen before, which then outputs to an Activation layer that applies the sigmoid function, mapping the output to a float between 0 and 1.
#
# We also can define different loss functions and optimizers. In this case we use `binary_crossentropy` as a loss function, also known as log loss. We also use a different type of optimizer now, `adam`, which is just another method of stochastic optimization. We see again how Keras has provided such power and complexity at our fingertips - we can prototype with awesome speed.
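# To make the Embedding layer less mysterious, it can be pictured as a trainable lookup table; a minimal NumPy sketch with made-up sizes:

```python
import numpy as np

# An Embedding layer is essentially a trainable lookup table: word index i
# maps to row i of a (vocab_size, embedding_dim) matrix. The sizes here are
# made up; in Keras the table's entries are learned during training.
vocab_size, embedding_dim = 10, 4
rng = np.random.default_rng(0)
embedding = rng.normal(size=(vocab_size, embedding_dim))

sentence = np.array([3, 1, 4, 1, 5])  # a "sentence" of word indices
vectors = embedding[sentence]         # one vector per word
print(vectors.shape)
```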
# We then train our model:
model.fit(X_train, y_train,
batch_size=batch_size, nb_epoch=15,
validation_data=(X_test, y_test))
# and evaluate its accuracy:
score, acc = model.evaluate(X_test, y_test,
batch_size=batch_size)
print('Test loss:', score)
print('Test accuracy %:', acc)
# With this model we have achieved 81% accuracy on the test dataset and 97% accuracy on the training dataset. If we look at the output above, epoch 3 actually had the best test accuracy. This suggests we might be overfitting the model on the training data, so we can try making some changes to the model to avoid this. The following is a proposed alternate model to avoid overfitting:
# +
from keras.layers import GaussianNoise
model = Sequential()
model.add(Embedding(max_features, 128, dropout=0.25))
model.add(GaussianNoise(0.2))
model.add(GRU(128, dropout_W=0.25, dropout_U=0.25))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.fit(X_train, y_train,
batch_size=batch_size, nb_epoch=4,
validation_data=(X_test, y_test))
score, acc = model.evaluate(X_test, y_test,
batch_size=batch_size)
print('Test loss:', score)
print('Test accuracy %:', acc)
# -
# We seem to have increased our accuracy on the testing dataset to 85%! We made several changes to our model. GaussianNoise adds zero-centered Gaussian noise, with the standard deviation given as the parameter, to the layer's training input - this means we don't learn the training data too "exactly". We also changed LSTM to GRU, a Gated Recurrent Unit which you can read about here (https://arxiv.org/pdf/1412.3555v1.pdf); this is another type of layer that has been shown to work well with this sort of data. And we increased the dropout rates slightly to further mitigate overfitting.
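# A quick sketch of what additive zero-centered Gaussian noise does to a batch (Keras applies it only at training time; shapes and stddev here are arbitrary):

```python
import numpy as np

# Additive zero-centered Gaussian noise: each training input is perturbed
# by a draw from N(0, stddev^2), so the network never sees exactly the
# same example twice.
rng = np.random.default_rng(42)
x = np.ones((1000, 8))
stddev = 0.2
noisy = x + rng.normal(loc=0.0, scale=stddev, size=x.shape)
print(noisy.mean())  # stays close to 1.0: the noise is zero-centered
```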
#
# I'm no neural network expert, but I was able to squeeze out 3% more accuracy on the test data with my changes. The takeaway here is that it is incredibly easy to just grab and drop different layers into our network to try things out in order to get better networks. With something as mathematically and representationally powerful as neural networks, whose complexity often defies human understanding, it is very useful to have a tool that allows us to try different networks efficiently and with very little pain.
# ## Example Application: Diabetes in Pima Indians
# Code for this example is adapted from the reference. We now show another example on classifying patients by whether they show onset of diabetes, based on several variables including: number of times pregnant, tricep skin fold thickness, BMI, and others. We begin by importing our required model and layers:
# +
from keras.models import Sequential
from keras.layers import Dense
import numpy
# load dataset
dataset = numpy.loadtxt("pima-indians-diabetes.data", delimiter=",")
X = dataset[:,0:8]
Y = dataset[:,8]
# -
# We then define and compile a simple model with just 3 layers of densely connected units with different activation functions:
# +
model = Sequential()
model.add(Dense(12, input_dim=8, init='uniform', activation='relu'))
model.add(Dense(8, init='uniform', activation='relu'))
model.add(Dense(1, init='uniform', activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# -
# We can then fit our model on the dataset, but also use an optional parameter to split our data automatically into training and validation:
model.fit(X, Y, validation_split=0.25, nb_epoch=150, batch_size=10)
# We can also apply techniques we've learned in class, such as k-fold cross validation. A quick little addition from scikit-learn called StratifiedKFold helps with our k-fold validation. As you can see, we can very simply add and remove additional complexity and steps on top of all of our Keras models without a care in the world:
# +
from sklearn.model_selection import StratifiedKFold
kfold = StratifiedKFold(n_splits=10, shuffle=True)
cvscores = []
for train, test in kfold.split(X, Y):
# create model
model = Sequential()
model.add(Dense(12, input_dim=8, init='uniform', activation='relu'))
model.add(Dense(8, init='uniform', activation='relu'))
model.add(Dense(1, init='uniform', activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the model
model.fit(X[train], Y[train], nb_epoch=150, batch_size=10, verbose=0)
# evaluate the model
scores = model.evaluate(X[test], Y[test], verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
cvscores.append(scores[1] * 100)
print("%.2f%% (+/- %.2f%%)" % (numpy.mean(cvscores), numpy.std(cvscores)))
# -
# As we can see, it is incredibly easy to make and modify Keras models, as well as build additional infrastructure and techniques on top of them. You can find more information about neural networks and Keras in the links below. I hope you had as much fun and learned as much reading this tutorial as I did making it.
# ## Additional Resources
#
# - Keras tutorial videos: https://www.youtube.com/playlist?list=PLFxrZqbLojdKuK7Lm6uamegEFGW2wki6P
# - Keras examples: https://github.com/fchollet/keras/tree/master/examples
# - Neural Network reading: https://en.wikipedia.org/wiki/Artificial_neural_network
# - Additional NN reading: http://www.cs.cmu.edu/~epxing/Class/10701-10s/Lecture/lecture7.pdf
# - Old CMU Neural Net class: https://www.cs.cmu.edu/afs/cs/academic/class/15782-f06/
# ## Summary and References
# - Keras: https://keras.io/
# - Iris Dataset: https://archive.ics.uci.edu/ml/datasets/Iris
# - IMDB Example: https://github.com/fchollet/keras/blob/master/examples/imdb_lstm.py
# - IMDB Alternate: https://github.com/fchollet/keras/blob/master/examples/imdb_cnn_lstm.py
# - Pima Indians Example: http://machinelearningmastery.com/tutorial-first-neural-network-python-keras/
# - Model Evaluation: http://machinelearningmastery.com/evaluate-performance-deep-learning-models-keras/
| 2016/tutorial_final/41/Keras Tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Riskfolio-Lib Tutorial:
# <br>__[Financionerioncios](https://financioneroncios.wordpress.com)__
# <br>__[Orenji](https://www.orenj-i.net)__
# <br>__[Riskfolio-Lib](https://riskfolio-lib.readthedocs.io/en/latest/)__
# <br>__[<NAME>](https://www.linkedin.com/in/dany-cajas/)__
# <a href='https://ko-fi.com/B0B833SXD' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://cdn.ko-fi.com/cdn/kofi1.png?v=2' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
#
# ## Tutorial 15: Mean [Entropic Value at Risk (EVaR)](https://en.wikipedia.org/wiki/Entropic_value_at_risk) Optimization
#
# ## 1. Downloading the data:
# +
import numpy as np
import pandas as pd
import yfinance as yf
import warnings
warnings.filterwarnings("ignore")
yf.pdr_override()
pd.options.display.float_format = '{:.4%}'.format
# Date range
start = '2016-01-01'
end = '2019-12-30'
# Tickers of assets
assets = ['JCI', 'TGT', 'CMCSA', 'CPB', 'MO', 'APA', 'MMC', 'JPM',
'ZION', 'PSA', 'BAX', 'BMY', 'LUV', 'PCAR', 'TXT', 'TMO',
'DE', 'MSFT', 'HPQ', 'SEE', 'VZ', 'CNP', 'NI', 'T', 'BA']
assets.sort()
# Downloading data
data = yf.download(assets, start = start, end = end)
data = data.loc[:,('Adj Close', slice(None))]
data.columns = assets
# +
# Calculating returns
Y = data[assets].pct_change().dropna()
display(Y.head())
# -
# ## 2. Estimating Mean EVaR Portfolios
#
# ### 2.1 Calculating the portfolio that maximizes Return/EVaR ratio.
#
# The Entropic Value at Risk is a risk measure introduced by Ahmadi-Javid (2012) and is defined as:
#
# $$
# \text{EVaR}_{\alpha}(X) = \inf_{z>0} \left \{z\log \left ( \frac{1}{\alpha} M_{X} (\frac{1}{z}) \right ) \right \}
# $$
#
# Where $M_{X} (t) = \text{E} [e^{tX}]$ is the moment generating function and $\alpha \in [0,1]$ is the significance level.
#
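# The infimum over $z$ can be approximated numerically on a return sample; a rough grid-search sketch using the empirical moment generating function on a made-up loss sample (Riskfolio-Lib instead solves the exact problem via exponential cone programming):

```python
import numpy as np

# Numerical sketch of EVaR_alpha(X) = inf_{z>0} z*log(M_X(1/z)/alpha) on a
# sample of losses, using the empirical moment generating function and a
# coarse grid over z. The toy loss sample below is made up.
rng = np.random.default_rng(1)
losses = rng.normal(loc=0.0, scale=0.02, size=10_000)
alpha = 0.05

def evar(x, alpha, z_grid=np.geomspace(1e-3, 1.0, 400)):
    mgf = lambda t: np.mean(np.exp(t * x))        # empirical M_X(t)
    return min(z * np.log(mgf(1.0 / z) / alpha) for z in z_grid)

var_hist = float(np.quantile(losses, 1 - alpha))  # historical VaR
print(evar(losses, alpha), var_hist)              # EVaR upper-bounds VaR
```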
# In a similar way to the Markowitz model, the mean EVaR model can be expressed as one of the following problems:
#
# $$
# \begin{aligned}
# & \max_{w,\, z} & & \mu w - \lambda \text{EVaR}_{\alpha}(r w)\\
# & & & 1^{T}w = 1 \\
# & & & w \geq 0 \\
# \end{aligned}
# $$
#
# $$
# \begin{aligned}
# & \max_{w,\, z} & & \frac{\mu w - r_{f}}{\text{EVaR}_{\alpha}(r w)}\\
# & & & 1^{T}w = 1 \\
# & & & w \geq 0 \\
# \end{aligned}
# $$
#
# $$
# \begin{aligned}
# & \min_{w,\, z} & & \text{EVaR}_{\alpha}(r w)\\
# & & & 1^{T}w = 1 \\
# & & & w \geq 0 \\
# \end{aligned}
# $$
#
# Where $z$ is the factor of EVaR, $w$ are the weights of assets, $\mu$ is the mean vector, $\lambda$ is the risk aversion factor, $r$ is the returns matrix and $r_{f}$ the risk free rate.
#
# It is recommended to use MOSEK to optimize EVaR, because the problem requires exponential cone programming to solve.
#
# Instructions to install MOSEK are in this __[link](https://docs.mosek.com/9.2/install/installation.html)__; it is easier to install it using Anaconda. You will also need a license; I recommend asking for an academic license __[here](https://www.mosek.com/products/academic-licenses/)__.
# +
import riskfolio.Portfolio as pf
# Building the portfolio object
port = pf.Portfolio(returns=Y)
# Calculating optimum portfolio
# Select method and estimate input parameters:
method_mu='hist' # Method to estimate expected returns based on historical data.
method_cov='hist' # Method to estimate covariance matrix based on historical data.
port.assets_stats(method_mu=method_mu, method_cov=method_cov, d=0.94)
# Estimate optimal portfolio:
port.solvers = ['MOSEK'] # It is recommended to use mosek when optimizing EVaR
port.alpha = 0.05 # Significance level for CVaR, EVaR and CDaR
model='Classic' # Could be Classic (historical), BL (Black Litterman) or FM (Factor Model)
rm = 'EVaR' # Risk measure used, this time will be EVaR
obj = 'Sharpe' # Objective function, could be MinRisk, MaxRet, Utility or Sharpe
hist = True # Use historical scenarios for risk measures that depend on scenarios
rf = 0 # Risk free rate
l = 0 # Risk aversion factor, only useful when obj is 'Utility'
w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist)
display(w.T)
# -
# ### 2.2 Plotting portfolio composition
# +
import riskfolio.PlotFunctions as plf
# Plotting the composition of the portfolio
ax = plf.plot_pie(w=w, title='Sharpe Mean - Entropic Value at Risk', others=0.05, nrow=25, cmap = "tab20",
height=6, width=10, ax=None)
# -
# ### 2.3 Calculate efficient frontier
# +
points = 40 # Number of points of the frontier
frontier = port.efficient_frontier(model=model, rm=rm, points=points, rf=rf, hist=hist)
display(frontier.T.head())
# +
# Plotting the efficient frontier
label = 'Max Risk Adjusted Return Portfolio' # Title of point
mu = port.mu # Expected returns
cov = port.cov # Covariance matrix
returns = port.returns # Returns of the assets
ax = plf.plot_frontier(w_frontier=frontier, mu=mu, cov=cov, returns=returns, rm=rm,
rf=rf, alpha=0.05, cmap='viridis', w=w, label=label,
marker='*', s=16, c='r', height=6, width=10, ax=None)
# +
# Plotting efficient frontier composition
ax = plf.plot_frontier_area(w_frontier=frontier, cmap="tab20", height=6, width=10, ax=None)
# -
# ## 3. Estimating Risk Parity Portfolios for EVaR
#
# ### 3.1 Calculating the risk parity portfolio for EVaR.
#
# The risk parity portfolio for the EVaR risk measure is the solution of the problem:
#
# $$
# \begin{aligned}
# & \min_{w,\, z} & & \text{EVaR}_{\alpha}(r w) - b^{T} \ln(w)\\
# & & & 1^{T}w = 1 \\
# & & & w \geq 0 \\
# \end{aligned}
# $$
#
# Where $w$ are the weights of assets and $b$ is the vector of risk contribution constraints; by default it is a vector with all entries equal to 1/(number of assets).
# +
b = None # Risk contribution constraints vector
w_rp = port.rp_optimization(model=model, rm=rm, rf=rf, b=b, hist=hist)
display(w_rp.T)
# -
# ### 3.2 Plotting portfolio composition
ax = plf.plot_pie(w=w_rp, title='Risk Parity EVaR', others=0.05, nrow=25, cmap = "tab20",
height=6, width=10, ax=None)
# ### 3.3 Plotting Risk Composition
ax = plf.plot_risk_con(w_rp, cov=port.cov, returns=port.returns, rm=rm, rf=0, alpha=0.05,
color="tab:blue", height=6, width=10, ax=None)
| examples/Tutorial 15.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Running Model Trained Through Transfer Learning (VGG 16)
#
# ---
#
#
# ## Description
# This script will run the model trained by Transfer_Learning_Training.ipynb.
# There are two functions.
# * LoadModel() : Loads the model
# * predict() : Takes the model and a directory path of images to be predicted as arguments and classifies each image. The last three lines of the method can be uncommented to move irrelevant or relevant images to different folders
#
#
# ### Usage
# Can be used as a filter to separate damaged and undamaged buildings and move them into different directories.
import os
from os import listdir
from os.path import isfile, join
from shutil import copyfile
from keras import applications
from keras.preprocessing.image import ImageDataGenerator, load_img, img_to_array
from keras import optimizers
from keras.models import Sequential
from keras.layers import Dropout, Flatten, Dense
import numpy as np
from keras import Model
img_width, img_height = 150, 150
def LoadModel(modelPath):
top_model_weights_path = modelPath #Additional layer weights
# build the VGG16 network
model = applications.VGG16(weights='imagenet', include_top=False, input_shape=(150, 150, 3))
print('vgg16 Model loaded.')
#Additional layers for vgg16
top_model = Sequential()
top_model.add(Flatten(input_shape=model.output_shape[1:]))
top_model.add(Dense(256, activation='relu'))
top_model.add(Dropout(0.5))
top_model.add(Dense(1, activation='sigmoid'))
top_model.load_weights(top_model_weights_path) #Weights for new layers loaded
    #join new layers to output of vgg16
    model = Model(input=model.input, output=top_model(model.output))
model.compile(loss='binary_crossentropy',
optimizer=optimizers.SGD(lr=1e-4, momentum=0.9),
metrics=['accuracy'])
# model.load_weights('fine_tuned_weights.h5') #load the weights for fine tuned
return model
def predict(basedir, model):
onlyfiles = [f for f in listdir(basedir) if
isfile(join(basedir, f))]
i=0
for img_name in onlyfiles:
img_path=basedir+"/"+img_name
img = load_img(img_path, False, target_size=(img_width, img_height))
x = img_to_array(img)
x = np.expand_dims(x, axis=0)
preds = model.predict(x)
pred_folder = str(int(preds[0][0]))
if preds[0][0] == 0 or preds[0][0] == 1:
print(str(i) + " prediction " + pred_folder + " probability " + str(preds[0][0]))
i += 1
try:
os.stat(pred_folder)
except:
os.mkdir(pred_folder )
# copyfile(img_path,pred_folder + '/' +img_name)
# else:
# copyfile(img_path, "unknown" + '/' + img_name)
model = LoadModel('bottleneck_50_epochs.h5')
basedir = "dataset/predictions"
predict(basedir, model)
| Version 2 (Retrained Final Layer of VGG)/Run_Transfer_Learning_Model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Data PreProcessing
#
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings(action='ignore')
df = pd.read_csv('insurance.csv')
df
df.describe()
plt.hist(df['charges'],bins=10,color='green')
plt.xlabel('intervals')
plt.ylabel('charges')
plt.title('Charge Distribution')
#skewed towards lower values
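# The "skewed towards lower values" observation can be quantified with the third standardized moment; a sketch using synthetic lognormal data as a stand-in for the real charges column:

```python
import numpy as np

# Fisher moment skewness: positive for a right-skewed sample (mass at
# lower values, long right tail). The lognormal draw below is synthetic.
rng = np.random.default_rng(0)
charges_like = rng.lognormal(mean=9.0, sigma=0.9, size=5000)

def skewness(x):
    # E[((x - mean) / std)^3]
    return float(np.mean(((x - x.mean()) / x.std()) ** 3))

print(skewness(charges_like))  # positive: long right tail
```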
df.isnull().sum()
df['charges'].describe()
#IQR method for outlier removal from the target variable
q3 = df['charges'].quantile(0.75)
q1 = df['charges'].quantile(0.25)
iqr = q3-q1
iqr
upper_limit = q3 + 1.5*iqr
lower_limit = q1 - 1.5*iqr
upper_limit,lower_limit
def limit_imputer(value):
    if value > upper_limit:
        return upper_limit
    elif value < lower_limit:
        return lower_limit
    else:
        return value
df['charges'] = df['charges'].apply(limit_imputer)
df['charges'].describe()
df.corr()
#age, bmi and children have correlation with charges , affects target variable
df['smoker'].unique()
#anova
from statsmodels.formula.api import ols
import statsmodels.api as sm
mod = ols('charges ~ smoker', data = df).fit()
Anova_Table = sm.stats.anova_lm(mod,typ=2)
print(Anova_Table)
#null hypothesis is rejected and alternate hypothesis is accepted
#therefore this variable impacts target variable
df.sex.unique()
mod = ols('charges ~ sex', data = df).fit()
Anova_Table = sm.stats.anova_lm(mod,typ=2)
print(Anova_Table)
#null hypothesis accepted and alternate hypothesis rejected
#this variable has no impact on the target variable
df.region.unique()
mod = ols('charges ~ region', data = df).fit()
Anova_Table = sm.stats.anova_lm(mod,typ=2)
print(Anova_Table)
#null hypothesis accepted and alternate hypothesis rejected
#this variable has no impact on the target variable
df.drop(columns=['sex','region'],inplace=True)
df
df = pd.get_dummies(df , columns=['smoker'],drop_first=True)
plt.hist(df['bmi'],bins=200,color='green')
plt.xlabel('intervals')
plt.ylabel('BMI')
plt.title('BMI Distribution')
#skewed towards lower values
df['bmi'].head()
def log_transform(sample_data):
return np.log(sample_data)
df['bmi'] = df['bmi'].map(log_transform)
df['bmi'].head()
plt.hist(df['bmi'],bins=200,color='green')
plt.xlabel('intervals')
plt.ylabel('BMI')
plt.title('BMI Distribution')
# # Dumping PreProcessed Data
df.to_csv('raw_data.csv', index=False)
# # Scaling The DataSet
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
Y = df['charges']
X = scaler.fit_transform(df.drop(columns=['charges']))
X = pd.DataFrame(data=X,columns=df.drop(columns=['charges']).columns)
X.head()
Y.head()
# # Splitting The DataSet
from sklearn.model_selection import train_test_split
x_train , x_test , y_train , y_test = train_test_split(X,Y,test_size=0.2,random_state=42)
x_train.shape, x_test.shape , y_train.shape, y_test.shape
# # Fitting The Model
from sklearn.linear_model import LinearRegression
lr = LinearRegression(normalize=True)
lr.fit(x_train,y_train)
predictions = lr.predict(x_test)
lr.score(x_test,y_test)
# # Coefficient Plot
coefficients_table = pd.DataFrame({'columns':x_train.columns,'coefficients':lr.coef_ })
coefficients_table = coefficients_table.sort_values(by='coefficients')
plt.figure(figsize=(6,8),dpi=70)
x = coefficients_table['columns']
y = coefficients_table['coefficients']
plt.barh(x,y)
plt.xlabel('Coefficients')
plt.ylabel('variables')
plt.title('Normalised coefficient plot')
plt.show()
# # Model Dumping
from joblib import dump,load
dump(lr,'insurance.joblib')
| insurance_model_code.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
from matplotlib import pyplot as plt
import matplotlib.ticker as mtick
beta = 0.5
gamma = 0.25
x0 = np.array([0.95, 0.05, 0])
def f(x):
assert x.shape == (3,)
dS = -beta * x[0] * x[1]
dR = gamma * x[1]
dI = -dS - dR
return np.array([dS,dI,dR])
def approx(N):
x = x0.copy()
h = 40.0/N
out = np.empty(shape=(N+1,3))
for i in range(N):
out[i] = x
x += h * f(x)
out[N] = x
return out
# -
#4a
print((approx(4000)[-1,0] - approx(8000)[-1,0]) / (approx(8000)[-1,0] - approx(16000)[-1,0]))
def p4bc():
N = 8000
arr = approx(N)
ts = np.linspace(0,40,N+1)
plt.figure()
plt.plot(ts, arr[:,0], '-', label='S(t)')
plt.plot(ts, arr[:,1], '--', label='I(t)')
plt.plot(ts, arr[:,2], ':', label='R(t)')
plt.legend()
plt.ylabel("Population")
plt.xlabel("t")
plt.gca().get_yaxis().set_major_formatter(mtick.PercentFormatter(1))
plt.show()
#4b
beta = 1
gamma = 0.5
p4bc()
#4c
beta = 1
gamma = 0.8
p4bc()
# +
#4d
# Gamma = 0.8 leads to S(N) being just over half
| 514/hwk8/p4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + pycharm={"name": "#%%\n"}
import numpy as np
from matplotlib import pyplot as plt
import scipy.stats
# + [markdown] pycharm={"name": "#%% md\n"}
#
# ## Problem 1
# + pycharm={"name": "#%%\n"}
p_grid = np.linspace(0, 1, 20)
prior = np.ones_like(p_grid)
likelihood = scipy.stats.binom.pmf(4, 4+11, p=p_grid)
posterior = likelihood*prior
posterior=posterior/sum(posterior)
plt.plot(p_grid,posterior);
# + [markdown] pycharm={"name": "#%% md\n"}
# ## Problem 2
# + pycharm={"name": "#%%\n"}
p_grid = np.linspace(0, 1, 20)
prior = np.concatenate([np.zeros(p_grid.size // 2),
                        np.ones(p_grid.size // 2)])
likelihood = scipy.stats.binom.pmf(4, 4+11, p=p_grid)
posterior = likelihood*prior
posterior=posterior/sum(posterior)
plt.plot(p_grid,posterior);
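A common next step with grid approximation (a sketch, not part of the original solution) is to draw samples from the grid posterior; the sample mean should approach the analytic posterior mean of a Beta(5, 12), i.e. 5/17 ≈ 0.29:

```python
import numpy as np

# Sample grid points with probability proportional to the posterior
# (flat prior times binomial likelihood, built the same way as above).
rng = np.random.default_rng(0)
p_grid = np.linspace(0, 1, 20)
posterior = p_grid**4 * (1 - p_grid)**11   # unnormalised
posterior = posterior / posterior.sum()
samples = rng.choice(p_grid, size=10_000, p=posterior)
print(samples.mean())   # close to the analytic posterior mean 5/17 ≈ 0.29
```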
# + pycharm={"name": "#%%\n"}
| docs/bayes_window_book/rethinking/week01.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 1A.data - data visualization - correction
#
# Correction.
# %matplotlib inline
import matplotlib.pyplot as plt
from jyquickhelper import add_notebook_menu
add_notebook_menu()
# ## Exercise 1: age gap between spouses
#
# We first reuse the code that downloads the data.
# +
from urllib.error import URLError
import pyensae
from pyensae.datasource import dBase2df, DownloadDataException
files = ["etatcivil2012_nais2012_dbase.zip",
"etatcivil2012_dec2012_dbase.zip",
"etatcivil2012_mar2012_dbase.zip" ]
try:
pyensae.download_data(files[-1],
website='http://telechargement.insee.fr/fichiersdetail/etatcivil2012/dbase/')
except (DownloadDataException, URLError, TimeoutError):
# backup plan
pyensae.download_data(files[-1], website="xd")
df = dBase2df("mar2012.dbf")
print(df.shape, df.columns)
df.head()
# -
# Then we carry out the operations suggested by the exercise statement.
df["ANAISH"] = df["ANAISH"].astype(int)
df["ANAISF"] = df["ANAISF"].astype(int)
df["differenceHF"] = df.ANAISH - df.ANAISF
df["nb"] = 1
dist = df[["nb","differenceHF"]].groupby("differenceHF", as_index=False).count()
import pandas
pandas.concat([dist.head(n=2), dist.tail(n=3)])
# ## Exercise 2: plotting the distribution with pandas
#
# The example follows the [bar plots](http://pandas.pydata.org/pandas-docs/stable/visualization.html#bar-plots) section of the pandas documentation.
ax = dist.plot(kind="bar", y="nb", x="differenceHF", figsize=(16,6), color="red")
ax.set_title("Distribution des écarts d'âge entre mari et femme");
# But we could also draw the distribution directly, without a ``group by``, as suggested by the [histograms](http://pandas.pydata.org/pandas-docs/stable/visualization.html#histograms) section.
ax = df["differenceHF"].hist(figsize=(16,6), bins=50)
ax.set_title("Distribution des écarts d'âge entre mari et femme");
# Or the smoothed distribution (see [density plot](http://pandas.pydata.org/pandas-docs/stable/visualization.html#density-plot)); this takes about a minute:
ax = df["differenceHF"].plot(figsize=(16,6), kind="kde")
ax.set_title("Distribution des écarts d'âge entre mari et femme");
# The second chart can be obtained by writing:
df["ageH"] = -df.ANAISH + 2012
df["ageF"] = -df.ANAISF + 2012
ax = df.plot(x="ageH", y="ageF", kind="scatter")
ax.set_title("Nuage de points - âge mari et femme");
# There are too many points for this to be readable, which is why a [heatmap](http://pandas.pydata.org/pandas-docs/stable/visualization.html#hexagonal-bin-plot) is often used instead.
ax = df.plot(kind='hexbin', x="ageH", y="ageF", gridsize=25, figsize=(7,6))
ax.set_title("Heatmap - âge entre mari et femmes");
# ## Exercise 3: distribution of marriages by day
#
# We want a chart showing the histogram of the number of marriages per day of the week, together with a second curve, on a secondary axis, for the cumulative distribution.
#
# https://github.com/pydata/pandas/issues/11111
# +
# this code fails with pandas 0.17.rc1; use 0.16.2 or 0.17.rc2
df["nb"] = 1
dissem = df[["JSEMAINE","nb"]].groupby("JSEMAINE",as_index=False).sum()
total = dissem["nb"].sum()
repsem = dissem.cumsum()
repsem["nb"] /= total
ax = dissem["nb"].plot(kind="bar", color="red")
repsem["nb"].plot(ax=ax, secondary_y=True)
ax.set_title("distribution des mariages par jour de la semaine");
# -
# ## Exercise 4: drawing a graph with networkx
#
# We build a random graph: its 20 edges are obtained by drawing 20 pairs of random integers between 0 and 10. Each edge gets a random width.
# +
import random
import networkx as nx
G = nx.Graph()
edge_width = []
for i in range(20):
    G.add_edge(random.randint(0, 10), random.randint(0, 10))
    edge_width.append(random.randint(1, 5))
import matplotlib.pyplot as plt
f, ax = plt.subplots(figsize=(8,4))
pos=nx.spring_layout(G)
nx.draw_networkx_nodes(G,pos)
nx.draw_networkx_edges(G,pos,width=edge_width,ax=ax);
# -
| _doc/notebooks/td1a_dfnp/td1a_correction_session_12.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/rcrowe-google/schemacomponent/blob/Nirzari%2Ffeature%2Fexample/example/taxi_example_colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="tjdL-1KzpQsN"
# # Chicago taxi example using TFX schema curation custom component
#
# This example demonstrates the use of the schema curation custom component. The user-defined function `schema_fn` in `module_file.py` is used by the schema curation component to change the schema feature `tips` from required to optional.
#
# Base code taken from: https://github.com/tensorflow/tfx/blob/master/docs/tutorials/tfx/components_keras.ipynb
# + [markdown] id="tMWVJKcLQ6c0"
# ## Setup
# + [markdown] id="MZOYTt1RW4TK"
# ### Install TFX
#
# **Note: In Google Colab, because of package updates, the first time you run this cell you must restart the runtime (Runtime > Restart runtime ...).**
# + id="S4SQA7Q5nej3" colab={"base_uri": "https://localhost:8080/"} outputId="9c250f97-a00f-4d31-d0b7-2c312a8b9a61"
# !pip install -U tfx
# + id="VBcF0mYLOe6w" colab={"base_uri": "https://localhost:8080/"} outputId="cfaeffd5-7f73-4304-b413-ae0fcf33f1dc"
# x = !pwd
if 'schemacomponent' not in str(x):
# !git clone https://github.com/rcrowe-google/schemacomponent
# %cd schemacomponent/example
# + [markdown] id="0OUhlid3RCV1"
# ## Chicago taxi example pipeline
#
# + id="YIqpWK9efviJ" colab={"base_uri": "https://localhost:8080/"} outputId="08db2bd2-e34b-4d8b-a1a1-f3a0bbba0cfc"
import os
import pprint
import tempfile
import urllib
import absl
import tensorflow as tf
import tensorflow_model_analysis as tfma
tf.get_logger().propagate = False
pp = pprint.PrettyPrinter()
from tfx import v1 as tfx
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
# %load_ext tfx.orchestration.experimental.interactive.notebook_extensions.skip
# + id="bXGK30MDQd9m"
from schemacomponent.component import component
# + [markdown] id="ufJKQ6OvkJlY"
# ### Set up pipeline paths
# + id="ad5JLpKbf6sN"
# This is the root directory for your TFX pip package installation.
_tfx_root = tfx.__path__[0]
# + [markdown] id="n2cMMAbSkGfX"
# ### Download example data
# We download the example dataset for use in our TFX pipeline.
#
# The dataset we're using is the [Taxi Trips dataset](https://data.cityofchicago.org/Transportation/Taxi-Trips/wrvz-psew) released by the City of Chicago. The columns in this dataset are:
#
# <table>
# <tr><td>pickup_community_area</td><td>fare</td><td>trip_start_month</td></tr>
# <tr><td>trip_start_hour</td><td>trip_start_day</td><td>trip_start_timestamp</td></tr>
# <tr><td>pickup_latitude</td><td>pickup_longitude</td><td>dropoff_latitude</td></tr>
# <tr><td>dropoff_longitude</td><td>trip_miles</td><td>pickup_census_tract</td></tr>
# <tr><td>dropoff_census_tract</td><td>payment_type</td><td>company</td></tr>
# <tr><td>trip_seconds</td><td>dropoff_community_area</td><td>tips</td></tr>
# </table>
#
# With this dataset, we will build a model that predicts the `tips` of a trip.
# + id="BywX6OUEhAqn" colab={"base_uri": "https://localhost:8080/"} outputId="ec214f25-e5eb-4e77-bf91-1c65675c8a3f"
_data_root = tempfile.mkdtemp(prefix='tfx-data')
DATA_PATH = 'https://raw.githubusercontent.com/tensorflow/tfx/master/tfx/examples/chicago_taxi_pipeline/data/simple/data.csv'
_data_filepath = os.path.join(_data_root, "data.csv")
urllib.request.urlretrieve(DATA_PATH, _data_filepath)
# + [markdown] id="HdQWxfsVkzdJ"
# ## Run TFX components
# In the cells that follow, we create TFX components one by one and generate a `schema` using the `SchemaGen` component.
# + id="0Rh6K5sUf9dd" colab={"base_uri": "https://localhost:8080/", "height": 0} outputId="560dbd32-57c2-4b63-d39d-d5f0a0c020a2"
context = InteractiveContext()
#create and run exampleGen component
example_gen = tfx.components.CsvExampleGen(input_base=_data_root)
context.run(example_gen)
#create and run statisticsGen component
statistics_gen = tfx.components.StatisticsGen(
examples=example_gen.outputs['examples'])
context.run(statistics_gen)
#create and run schemaGen component
schema_gen = tfx.components.SchemaGen(
statistics=statistics_gen.outputs['statistics'],
infer_feature_shape=False)
context.run(schema_gen)
# + [markdown] id="XmCvzk0Lrycj"
# ## Schema curation custom component
#
# Using the schema curation component, the `tips` feature is changed to `optional`.
#
# The code that modifies the schema is the user-supplied `schema_fn` in `module_file.py`.
#
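As a rough illustration only: a hypothetical sketch of what the `schema_fn` in `module_file.py` might look like. The repo's actual implementation may differ, and a `SimpleNamespace` stands in here for a real `tensorflow_metadata` `schema_pb2.Schema` proto so the logic runs without TFX installed.

```python
from types import SimpleNamespace

# Hypothetical sketch of module_file.py's schema_fn; the repo's actual
# implementation may differ. A SimpleNamespace stands in for a
# tensorflow_metadata schema_pb2.Schema proto.
def schema_fn(schema):
    """Turn the required 'tips' feature into an optional one."""
    for feature in schema.feature:
        if feature.name == 'tips':
            # In schema_pb2, presence.min_fraction == 1.0 marks a feature
            # as required; lowering it below 1.0 makes it optional.
            feature.presence.min_fraction = 0.9
    return schema

schema = SimpleNamespace(feature=[
    SimpleNamespace(name='tips', presence=SimpleNamespace(min_fraction=1.0)),
    SimpleNamespace(name='fare', presence=SimpleNamespace(min_fraction=1.0)),
])
schema = schema_fn(schema)
print(schema.feature[0].presence.min_fraction)  # 0.9 → tips is now optional
```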
# + [markdown] id="5P6vQsBnsswu"
# ### Display inferred schema
#
# In the inferred schema, the `tips` feature is shown as a `required` feature:
#
#
# tips | FLOAT | required | single
#
#
#
# + id="VkeNRpv9t-gq" outputId="d54fc6df-d907-400c-91e1-aaa64dfd2fa3" colab={"base_uri": "https://localhost:8080/", "height": 980}
# display inferred schema
context.show(schema_gen.outputs['schema'])
# + [markdown] id="tgdyxwyNwCxo"
# ### Modifying schema
# + colab={"base_uri": "https://localhost:8080/", "height": 186} id="1N4dUbwcHuXj" outputId="4eb718a9-cc2a-48f4-ef90-859e14472b40"
#schema curation component
schema_curation = component.SchemaCuration(schema=schema_gen.outputs['schema'],
module_file='module_file.py')
context.run(schema_curation)
# + [markdown] id="lE0MG8O8we18"
# ### Display modified schema
#
# The `tips` feature is now `optional` in the modified schema.
# + id="cfnuieU2wpr0" outputId="2fe236e9-9dc0-4fd0-f756-7614c5ec267f" colab={"base_uri": "https://localhost:8080/", "height": 980}
context.show(schema_curation.outputs['custom_schema'])
| tfx_addons/schema_curation/example/taxi_example_colab.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import geopandas as gpd
import matplotlib.pyplot as plt
import networkx as nx
import osmnx as ox
from descartes import PolygonPatch
from shapely.geometry import LineString
from shapely.geometry import Point
from shapely.geometry import Polygon
import json
# %matplotlib inline
ox.config(log_console=True)
ox.__version__
# -
# ## Custom Objects
# + [markdown] tags=[]
# ### Isochrone Plots
# + jupyter={"source_hidden": true} tags=[]
from networkx.classes.multidigraph import MultiDiGraph
class IsochronePlots:
def __init__(self, trip_times: list):
self.trip_times = trip_times
def colors(self) -> list:
self.colors = ox.plot.get_colors(
n=len(self.trip_times),
cmap="plasma",
start=0,
return_hex=True)
return self.colors
    def plot_node_isochrones(self, G: MultiDiGraph, center_node: int):
        color_by_node = {}
        for trip_time, color in zip(sorted(self.trip_times, reverse=True), self.colors):
            subgraph = nx.ego_graph(G, center_node, radius=trip_time, distance="time")
            for node in subgraph.nodes():
                color_by_node[node] = color
        # keep the dict intact so the membership tests below stay correct
        node_colors = [color_by_node.get(node, "none") for node in G.nodes()]
        node_size = [15 if node in color_by_node else 0 for node in G.nodes()]
        return node_colors, node_size
# fig, ax = ox.plot_graph(
# G,
# node_color=nc,
# node_size=ns,
# node_alpha=0.8,
# edge_linewidth=0.2,
# edge_color="#999999",
# )
# -
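The `nx.ego_graph(G, center, radius=t, distance="time")` call above keeps every node whose shortest travel time from the center is at most `t`. A stdlib sketch of that cutoff (Dijkstra with a radius bound, on a hypothetical toy network):

```python
import heapq

# Stdlib sketch of what nx.ego_graph(G, center, radius=t, distance="time")
# computes: all nodes within shortest-path "time" distance t of the center.
def nodes_within(adj, center, radius):
    dist = {center: 0.0}
    heap = [(0.0, center)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd <= radius and nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return sorted(dist)

# toy walk network: edge weights are minutes
adj = {0: [(1, 3), (3, 10)], 1: [(0, 3), (2, 3)], 2: [(1, 3)], 3: [(0, 10)]}
print(nodes_within(adj, 0, 5))  # [0, 1] — node 2 is 6 min away, node 3 is 10
```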
# ## GraphBuilder
# + jupyter={"source_hidden": true} tags=[]
class GraphBuilder:
def __init__(self, place, transportation_mode, travel_speed):
self.place = place
self.transportation_mode = transportation_mode
self.travel_speed = travel_speed # in km/h
self.graph = None
def initialize_graph(self):
self.graph = ox.graph_from_place(
self.place,
network_type=self.transportation_mode
)
return self.graph
def calculate_centroid(self):
gdf_nodes = ox.graph_to_gdfs(self.graph, edges=False)
x, y = gdf_nodes["geometry"].unary_union.centroid.xy
center_node = ox.distance.nearest_nodes(self.graph, x[0], y[0])
return center_node
def initialize_projected_graph(self):
self.projected_graph = ox.project_graph(self.graph)
meters_per_minute = self.travel_speed * 1000 / 60 # km per hour to m per minute
for _, _, _, data in self.projected_graph.edges(data=True, keys=True):
data["time"] = data["length"] / meters_per_minute
return self.projected_graph
# -
# ### IsochroneJsonDump
# + jupyter={"source_hidden": true} tags=[]
class IsochroneJsonDump:
def __init__(self, node_colors, node_size):
self.node_colors = node_colors
self.node_size = node_size
def serialize(self):
return {"node_colors":self.node_colors,"node_size":self.node_size}
# +
place = "Buenos Aires, Argentina"
network_type = "walk"
trip_times = [5, 10, 15, 30, 45, 60, 90] # in minutes
travel_speed = 4.5 # walking speed in km/hour
graph_builder = GraphBuilder(place, network_type, travel_speed)
graph_builder.initialize_graph()
# -
G = graph_builder.graph
# +
center_node = graph_builder.calculate_centroid()
projected_G = graph_builder.initialize_projected_graph()
# get one color for each isochrone
isochrone_plots = IsochronePlots(trip_times)
iso_colors = isochrone_plots.colors()
# + tags=[]
print(G.size())
print(len(G.nodes))
print(len(G.edges))
with open("jsonDumpIsochrones.json", "w") as archivo:
isochronesForAllNodes = {}
for i in list(G.nodes)[:3]:
node_colors, node_size = isochrone_plots.plot_node_isochrones(projected_G,i)
isochrones = IsochroneJsonDump(node_colors,node_size)
isochronesForAllNodes[i] = isochrones.serialize()
archivo.write(json.dumps(isochronesForAllNodes))
# + tags=[]
#ox.distance.get_nearest_node() - Caro, Ari told me we could use this with the parks' coordinates
#to take that node as the reference node for the park
| Interseccion_Isocronas.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: SageMath 9.5
# language: sage
# name: sagemath-9.5
# ---
# +
import numpy as np
import math
import matplotlib as plt
import networkx as nx
# %matplotlib inline
# Ring of polynomials for the weights of the base
R.<k> = PolynomialRing(QQ)
# Ring of polynomials in a whose coefficients are rational functions in k
S.<a> = PolynomialRing(FractionField(R))
def sturm(p):
"""
Computes the Sturm polynomials.
Args:
p: a polynomial in a with rational coefficients in k.
Returns:
the Sturm polynomials of p.
"""
S = []
if p == 0:
return S
S.append(p)
S.append(diff(p, a))
assert S[-2].gcd(S[-1]) == 1, S[-2].gcd(S[-1])
while True:
next_p = -S[-2].mod(S[-1])
if next_p == 0:
break
S.append(next_p)
return S
# Maybe there's a better way, but I couldn't find it
x = var('x')
ge_op = (x>=0).operator()
gt_op = (x>0).operator()
lt_op = (x<0).operator()
def changes(S, *points, power=1):
"""
Computes sign changes in the Sturm polynomials in a at given points.
Args:
S: the Sturm polynomials of a polynomial in a.
points: the points at which sign changes must be computed.
power: replace k with x^power; by passing points expressed in
x, rather than in k, one can effectively compute sign
changes at points containing roots of k whose order is a
divisor of power.
Returns:
a dictionary mapping each point to the number of sign changes and the first valid value k for such changes.
"""
changes={}
for point in points:
prev_op = None
c = 0
m = 0
for i in range(len(S)):
sol = solve(S[i](a=point)(k=x^power)>0,x)
op = '+'
if len(sol) > 0:
if isinstance(sol[-1], list):
ineq = sol[-1][-1]
assert ineq.lhs() == x
if ineq.operator() != ge_op and ineq.operator() != gt_op:
op = '-'
m = max(m, ineq.rhs())
else:
# Here we expect x < +Infinity
assert sol[-1].operator() == lt_op and sol[-1].lhs() == x and sol[-1].rhs() == +Infinity
else:
# Everywhere negative
op = '-'
if i > 0 and op != prev_op:
c += 1
prev_op = op
# Now we try to improve m manually
for r in range(math.ceil(m) - 1, 0, -1):
prev_op = '0'
d = 0
for i in range(len(S)):
if S[i].denominator()(a=point)(k=r) == 0:
d = -1
break
v = S[i](a=point)(k=r)
if v == 0:
op = '0'
elif v > 0:
op = '+'
else:
op = '-'
if op != prev_op and op != '0' and prev_op != '0':
d += 1
prev_op = op
if d == c:
m = m -1
else:
break
changes[point] = [c, m^power]
return changes
def printChanges(changes, l, r):
"""
Prints (in an easy-to-read way) the number of changes of a function in a given interval (l..r].
Args:
changes: the result of changes(S,l,r).
l: the left estreme passed to changes().
r: the right extreme passed to changes().
"""
bound = max(changes[l][1], changes[r][1])
print("Number of zeros for " , l, " < ⍺ ≤ ", r, ": ", changes[l][0] - changes[r][0], " for k ", ("> " if math.ceil(bound) == bound else "≥ " ), math.ceil(bound), sep = "")
def printLimits(*points):
"""
Prints (in an easy-to-read way) the limits at infinity of the arguments.
"""
for point in points:
print(point, "→", limit(point, k=infinity), "for k → ∞")
def lower_bound(solve_res):
"""
Returns the very last inequality of a solve() result, given that is a lower bound
(an assertion will fail otherwise), rounding to the next integer.
Args:
solve_res: the result of solve().
Returns:
a string given by "> C" or "≥ C", where C is the lower bound.
"""
t = solve_res[-1][-1]
operator = t.operator()
operands = t.operands()
assert operator == gt_op or operator == ge_op
bound = math.ceil(operands[1])
if operator == gt_op and bound == operands[1]:
return "≥ " + str(bound + 1)
else:
return "≥ " + str(bound)
# -
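The Sturm-chain computation above relies on Sage; the same root-counting idea can be sketched in plain Python for a concrete polynomial (a toy illustration with float coefficients, not the paper's symbolic computation). For p(x) = x² − 2, the chain's sign changes count one real root in (0, 2]:

```python
def polyval(p, x):
    """Evaluate a polynomial given as a coefficient list, highest degree first."""
    v = 0.0
    for c in p:
        v = v * x + c
    return v

def deriv(p):
    n = len(p) - 1
    return [c * (n - i) for i, c in enumerate(p[:-1])]

def polyrem(a, b):
    """Remainder of polynomial division a mod b."""
    a = a[:]
    while len(a) >= len(b) and any(a):
        if a[0] == 0:
            a.pop(0)
            continue
        q = a[0] / b[0]
        for i in range(len(b)):
            a[i] -= q * b[i]
        a.pop(0)
    return a or [0]

def sturm_chain(p):
    chain = [p, deriv(p)]
    while True:
        r = [-c for c in polyrem(chain[-2], chain[-1])]
        if not any(r):
            break
        chain.append(r)
    return chain

def sign_changes(chain, x):
    signs = [polyval(p, x) for p in chain]
    signs = [s for s in signs if s != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

p = [1, 0, -2]          # x^2 - 2
chain = sturm_chain(p)
roots = sign_changes(chain, 0) - sign_changes(chain, 2)
print(roots)  # 1 real root in (0, 2], namely sqrt(2)
```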
# # Counterexample
# +
# Rank monotonicity counterexample
# Node names correspond to the labeling in the paper
A_pre = matrix(7, 7, [
0, 0, 0, 1, 1, 0, 0,
0, 0, 1, 0, 0, 1, 0,
0, 1, 0, 1, 0, 0, 1,
1, 0, 1, 0, 0, 0, 0,
k, 0, 0, 0, k-1, 0, 0,
0, (k-1)*(k-2), 0, 0, 0, 0, 0,
0, 0, k, 0, 0, 0, k-1
])
A_post = matrix(7, 7, [
0, 1, 0, 1, 1, 0, 0,
1, 0, 1, 0, 0, 1, 0,
0, 1, 0, 1, 0, 0, 1,
1, 0, 1, 0, 0, 0, 0,
k, 0, 0, 0, k-1, 0, 0,
0, (k-1)*(k-2), 0, 0, 0, 0, 0,
0, 0, k, 0, 0, 0, k-1
])
# Column sums of the adjugate (AKA Katz's index multiplied by the determinant)
r_pre = vector([1]*7) * (~(identity_matrix(7) - a * A_pre) * det(identity_matrix(7) - a * A_pre))
r_post = vector([1]*7) * (~(identity_matrix(7) - a * A_post) * det(identity_matrix(7) - a * A_post))
# -
# ## Dominant Eigenvalue Bounds
# The dominant eigenvalue of both matrices is sandwiched between $\mathit{lower}=k+\frac1{k^2}$ and $\mathit{upper} = k + \frac3{4k}$.
# +
lower, upper = k + 1/(k*k), k+3/(4*k)
print("Pre:")
charpoly = det(A_pre - a * identity_matrix(7))
St = sturm(charpoly)
l, r = lower, upper
result = changes(St, l, r)
printChanges(result, l, r)
l, r = upper, 2*k
result = changes(St, l, r)
printChanges(result, l, r)
print()
print("Post:")
charpoly = det(A_post - a * identity_matrix(7))
St = sturm(charpoly)
l, r = lower, upper
result = changes(St, l, r)
printChanges(result, l, r)
l, r = upper, 2*k
result = changes(St, l, r)
printChanges(result, l, r)
# Determinants are positive
det_pre = det(identity_matrix(7) - a * A_pre)
det_post = det(identity_matrix(7) - a * A_post)
unused = lower_bound(solve(det_pre(a=1/upper)(k=x)>0, x))
unused = lower_bound(solve(det_post(a=1/upper)(k=x)>0, x))
# -
# ## Score Comparison
# Node 1 is more important than node 0 when $\alpha = \frac1{\mathit{upper}}$ (top violation).
# +
p = S(r_pre[1] - r_pre[0])
print("Node 1 is more important than node 0 when ⍺ = 1/upper for k", lower_bound(solve(p(a=1/upper)(k=x)>0, x)))
# -
# It is also more important when $\frac1{\mathit{upper}}<\alpha\leq \frac1{\mathit{lower}}$, which is an interval including $\frac1\rho$.
# +
St = sturm(p)
l, r = 1/upper, 1/lower
printChanges(changes(St, l, r, power=2), l, r)
# -
# ## Rank Counterexample
# ### PRE
# Node 1 is more important than node 4 when $\alpha = \frac1{\mathit{upper}}$.
# +
p = S(r_pre[1] - r_pre[4])
print("Node 1 is more important than node 4 when ⍺ = 1/upper for k", lower_bound(solve(p(a=1/upper)(k=x)>0,x)))
# -
# It is also more important when $\frac1{\mathit{upper}}<\alpha\leq \frac1{\mathit{lower}}$, which is an interval including $\frac1\rho$.
# +
St = sturm(p)
l, r = 1/upper, 1/lower
printChanges(changes(St, l, r), l, r)
# -
# ### POST
#
# Node 1 is less important than node 4 when $\alpha = \frac1{\mathit{upper}}$.
# +
p = S(r_post[4] - r_post[1])
print("Node 1 is less important than node 4 when ⍺ = 1/upper for k", lower_bound(solve(p(a=1/upper)(k=x)>0,x)))
# -
# It is also less important when $\frac1{\mathit{upper}}<\alpha\leq \frac1{\mathit{lower}}$, which is an interval including $\frac1\rho$.
# +
St = sturm(p)
l, r = 1/upper, 1/lower
printChanges(changes(St, l, r), l, r)
# -
# # Statements Outside Interval of Validity
# ## Score Comparison
# +
p = S(r_pre[1] - r_pre[0])
St = sturm(p)
l, r = 0, 1/(k+2/k)
printChanges(changes(St, l, r), l, r)
l, r = 1/(k+2/k), 1/(k+1/k)
printChanges(changes(St, l, r), l, r)
l, r = 1/(k+1/k), 1/upper
printChanges(changes(St, l, r), l, r)
# -
# ## Rank Counterexample
# ### PRE
# +
p = S(r_pre[1] - r_pre[4])
St = sturm(p)
l, r = 0, 1/(k+2/k)
printChanges(changes(St, l, r), l, r)
l, r = 1/(k+2/k), 1/(k+1/k)
printChanges(changes(St, l, r), l, r)
l, r = 1/(k+1/k), 1/upper
printChanges(changes(St, l, r), l, r)
# -
# ### POST
#
# +
p = S(r_post[4] - r_post[1])
St = sturm(p)
l, r = 0, 1/(k+2/k)
printChanges(changes(St, l, r), l, r)
l, r = 1/(k+2/k), 1/(k+1/k)
printChanges(changes(St, l, r), l, r)
l, r = 1/(k+1/k), 1/upper
printChanges(changes(St, l, r), l, r)
# -
# ## Demotion when $\frac1{k+\frac2k}\leq\alpha\leq \frac1{\mathit{lower}}$
# Not counting node 0 and node 4, there are two nodes more important than node 1 before adding the edge when $\alpha=\frac1{k+\frac2k}$.
for i in range(7):
if i == 0 or i == 1 or i == 4:
continue
p = S(r_pre[i] - r_pre[1])
if p == 0:
print("Node", i, "has the same score of node 1")
else:
try:
kmin = lower_bound(solve(p(a=1/(k+2/k))(k=x)>0,x))
print("Node", i, "is ultimately more important than node 1 for k", kmin)
except:
print("Node", i, "is ultimately not more important than node 1")
# Not counting node 0 and node 4, there are two nodes more important than node 1 also when $\frac1{k+\frac2k}<\alpha\leq \frac1{\mathit{lower}}$, which is an interval including $\frac1\rho$.
# +
l, r = 1/(k+2/k), 1/lower
for i in range(7):
if i == 0 or i == 1 or i == 4:
continue
p = S(r_pre[1] - r_pre[i])
St = sturm(p)
printChanges(changes(St, l, r), l, r)
# -
# There are four nodes more important than node 1 after adding the edge when $\alpha=\frac1{k+\frac2k}$.
for i in range(7):
if i == 1:
continue
p = S(r_post[i] - r_post[1])
if p == 0:
print("Node", i, "has the same score of node 1")
else:
try:
kmin = lower_bound(solve(p(a=1/(k+2/k))(k=x)>0,x))
print("Node", i, "is ultimately more important than node 1 for k ", kmin)
except:
print("Node", i, "is ultimately not more important than node 1")
# There are four nodes more important than node 1 also when $\frac1{k+\frac2k}<\alpha\leq \frac1{\mathit{lower}}$, which is an interval including $\frac1\rho$.
# +
l, r = 1/(k+2/k), 1/lower
for i in range(7):
if i == 1:
continue
p = S(r_post[i] - r_post[1])
St = sturm(p)
printChanges(changes(St, l, r), l, r)
# -
# ## Existence of Further Regions (Top and Bottom Violations)
# When $\frac1{k + \frac2k}<\alpha\leq \frac1{\mathit{upper}}$ in the original graph, the relative importance of node 0 and node 1 flips twice, as does that of node 1 and node 4: first nodes 0 and 1 flip, then nodes 1 and 4 flip, then nodes 1 and 4 flip back, and finally nodes 0 and 1 flip back.
# +
p = S((r_pre[1] - r_pre[4]) - (r_pre[1] - r_pre[0]))
print("The score delta between node 1 and node 4 dominates the score delta between node 1 and node 0 when ⍺ = 1/(k+1/k) for k", lower_bound(solve(p(a=1/(k+1/k))(k=x)>0,x)))
St = sturm(p)
l, r = 1/(k+2/k), 1/upper
printChanges(changes(St, l, r), l, r)
# -
# # NetworkX Graphs
def getGH(kk):
"""
Returns the networkx graphs of $G_k$ and $G_k'$.
Args:
        kk: the value of k.
Returns:
the graph $G_k$, the graph $G_k'$ and a dictionary whose keys are the nodes of the graph and whose values
are the node names suitable to be passed to the draw function.
"""
labels={}
G=nx.Graph()
G.add_nodes_from([0,1])
for i in range(4):
labels[i]=i
G.add_nodes_from([0, 1, 2, 3])
G.add_edges_from([(0,3), (3,2), (2,1)])
for i in range(kk):
labels["4_{}".format(i)]=4
labels["6_{}".format(i)]=6
G.add_edge("4_{}".format(i),0)
G.add_edge("6_{}".format(i),2)
for j in range(i+1, kk):
G.add_edge("4_{}".format(i),"4_{}".format(j))
G.add_edge("6_{}".format(i),"6_{}".format(j))
for i in range((kk-1)*(kk-2)):
labels["5_{}".format(i)]=5
G.add_edge("5_{}".format(i),1)
H=G.copy()
H.add_edge(0,1,color='r')
return G, H, labels
G,H,labels=getGH(6)
nx.draw(H, nx.kamada_kawai_layout(H), with_labels=True, labels=labels, node_color=["red"]*2+["lightblue"]*(len(H)-2))
# Let's try to compute explicitly Katz's index for given values of $k$ and $\alpha$.
kk=8
aa=1/(kk+2/kk)+0.0
G,H,labels=getGH(kk)
c=nx.katz_centrality(G, alpha=aa, normalized=False)
cp=nx.katz_centrality(H, alpha=aa, normalized=False)
for xx in [0,1,2,3,"4_0","5_0","6_0"]:
print("{}\t{:.4f}\t{:.4f}".format(xx, c[xx], cp[xx]))
# Check that this is equivalent to its symbolic counterpart.
d=det_pre(a=aa)(k=kk)
dp=det_post(a=aa)(k=kk)
c=r_pre(a=aa)(k=kk).numpy(float)
cp=r_post(a=aa)(k=kk).numpy(float)
print("Determinant pre: {:.6f}\nDeterminant post: {:.6f}".format(d,dp))
for xx in range(7):
print("{}\t{:.4f}\t{:.4f}".format(xx, c[xx]/d, cp[xx]/dp))
# # Numerical Examples
# An example of bottom violation.
kk=60
aa=1/(kk+3/(4*kk))-0.0000000501
c=(r_pre/det_pre)(a=aa)(k=kk).numpy(float)
cp=(r_post/det_post)(a=aa)(k=kk).numpy(float)
for xx in [0,1,4]:
print("{}\t{:.10f}\t{:.10f}".format(xx, c[xx], cp[xx]))
# An example with $\alpha=\frac1{k+\frac2k}$.
kk=60
aa=1/(kk+2/kk)
c=(r_pre/det_pre)(a=aa)(k=kk).numpy(float)
cp=(r_post/det_post)(a=aa)(k=kk).numpy(float)
for xx in [0,1,4]:
print("{}\t{:.10f}\t{:.10f}".format(xx, c[xx], cp[xx]))
# An example of bottom violation in the second interval.
kk=60
aa=0.01666036
c=(r_pre/det_pre)(a=aa)(k=kk).numpy(float)
cp=(r_post/det_post)(a=aa)(k=kk).numpy(float)
for xx in [0,1,4]:
print("{}\t{:.10f}\t{:.10f}".format(xx, c[xx], cp[xx]))
# An example of top violation in the second interval.
kk=60
aa=0.0166602
c=(r_pre/det_pre)(a=aa)(k=kk).numpy(float)
cp=(r_post/det_post)(a=aa)(k=kk).numpy(float)
for xx in [0,1,4]:
print("{}\t{:.10f}\t{:.10f}".format(xx, c[xx], cp[xx]))
# # Graphics
# Behavior of the relative importance of nodes 0, 1 and 4 in the whole interval $0\leq\alpha<\frac1{\rho_k'}$.
kk=54
rhop = max(np.linalg.eig(A_post(k=kk))[0])
xx = np.linspace(0, (1/rhop)*1.00001, 100)
yy = [r_pre[1](k=kk)(a=x) - r_pre[4](k=kk)(a=x) for x in xx]
zz = [r_post[1](k=kk)(a=x) - r_post[4](k=kk)(a=x) for x in xx]
tt = [r_pre[1](k=kk)(a=x) - r_pre[0](k=kk)(a=x) for x in xx]
fig = plt.pyplot.figure(figsize=(14,8))
plt.pyplot.plot(xx, yy, color = "blue", label = "pre(1)-pre(4)")
plt.pyplot.plot(xx, zz, color = "red", label = "post(1)-post(4)")
plt.pyplot.plot(xx, tt, color = "green", label = "pre(1)-pre(0)")
plt.pyplot.plot(xx, [0]*len(xx))
plt.pyplot.axvline(x = 1/rhop, color = "black")
fig.legend(loc = "upper left")
fig.show()
# Behavior of the relative importance of nodes 0, 1 and 4 in the interval $\frac1{k+\frac2k}\leq\alpha<\frac1{\rho_k'}$.
# +
kk=54
rho = max(np.linalg.eig(A_pre(k=kk))[0])
rhop = max(np.linalg.eig(A_post(k=kk))[0])
xx = np.linspace(1/(kk+2.1/kk), (1/rho)*1.00001, 100)
yy = [r_pre[1](k=kk)(a=x) - r_pre[4](k=kk)(a=x) for x in xx]
zz = [r_post[1](k=kk)(a=x) - r_post[4](k=kk)(a=x) for x in xx]
tt = [r_pre[1](k=kk)(a=x) - r_pre[0](k=kk)(a=x) for x in xx]
fig = plt.pyplot.figure(figsize=(14,8))
plt.pyplot.plot(xx, yy, color = "blue", label = "pre(1)-pre(4)")
plt.pyplot.plot(xx, zz, color = "red", label = "post(1)-post(4)")
plt.pyplot.plot(xx, tt, color = "green", label = "pre(1)-pre(0)")
plt.pyplot.plot(xx, [0]*len(xx))
plt.pyplot.axvline(x = 1/rho, color = "blue")
plt.pyplot.axvline(x = 1/rhop, color = "black")
plt.pyplot.axvline(x = 1/(kk+2/kk), color = "yellow")
fig.legend(loc = "upper left")
fig.show()
# -
# Behavior of the relative importance of nodes 0, 1 and 4 in the interval where the importance flips happen.
kk=54
rhop = max(np.linalg.eig(A_post(k=kk))[0])
xx = np.linspace(.99975/rhop, (1/rhop), 100)
yy = [r_pre[1](k=kk)(a=x) - r_pre[4](k=kk)(a=x) for x in xx]
tt = [r_pre[1](k=kk)(a=x) - r_pre[0](k=kk)(a=x) for x in xx]
fig = plt.pyplot.figure(figsize=(14,8))
plt.pyplot.plot(xx, yy, color = "blue", label = "pre[1] - pre[4]")
plt.pyplot.plot(xx, tt, color = "green", label = "pre[1] - pre[0]")
plt.pyplot.plot(xx, [0]*len(xx))
plt.pyplot.axvline(x = 1/rhop, color = "black")
fig.legend(loc = "upper left")
fig.show()
| eigenvector.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
df = pd.read_csv("FinalMerge.csv",encoding="latin1")
df_act_dir = pd.read_csv("FinalClean.csv",encoding="latin1")
df_act_dir = df_act_dir[["actor_popularity","director_popularity"]]
df = pd.concat([df,df_act_dir],axis=1)
df.shape
df.isnull().sum()
col_replace = {"Actors":"actors","Country":"country","Director":"director","Genre":"genre",
"IMDB.Rating":"imdb_rating","IMDB.Votes":"imdb_votes","Language":"language",
"Released":"released","Runtime":"runtime","Year":"year",
"Production":"production","Rated":"rated"}
df = df.drop(["Unnamed: 0","imdb_id","Title","X.x","X.y"],axis=1).rename(columns=col_replace)
# drop_missing_value
mis_val_col = ["actors","director","genre","imdb_votes","runtime","imdb_rating","language"]
for col in mis_val_col:
df = df.drop(df[df[col].isnull()].index)
df.isnull().sum(),df.shape
num_feat = []
cate_feat = []
for i in df.columns:
if (df[i]).dtype == "int64" or (df[i]).dtype == "float64":
num_feat.append(i)
else:
cate_feat.append(i)
print(num_feat,len(num_feat))
print(cate_feat, len(cate_feat))
# ## Numeric features
# ### imdb_rating
sns.distplot(df["imdb_rating"])
# df["imdb_rating"].hist()
# ### budget
import math
df["budget"] = df["budget"].map(lambda x:math.log(x))
# df["budget"].hist()
sns.distplot(df["budget"])
# ### revenue (target)
# df["revenue"].hist()
# sns.distplot(df["revenue"],kde=False)
df["revenue"] = df["revenue"].map(lambda x:math.log(x))
sns.distplot(df["revenue"])
sns.jointplot(x="budget",y="revenue",data=df,kind="reg")
# ## Categorical features
# ### country
# number of regions/countries involved in producing the movie
df["country"] = df["country"].map(lambda x:len(str(x).split(",")))
# df["country"].value_counts().plot.bar(figsize=(16,6))
# df["country"].nunique()
# /df.shape[0]).plot.bar(figsize=(16,6))
# sns.boxplot(x="country",y="revenue",data=df)
# +
# df["country"].value_counts()
# df = df.drop("country",axis=1)
# -
num_feat.append("country")
print(num_feat,len(num_feat))
cate_feat.remove("country")
print(cate_feat, len(cate_feat))
# ### genre
df = pd.concat([df, df['genre'].str.get_dummies(sep=', ')], axis=1)
df['Thriller'] = df[['Thriller', 'Horror']].sum(axis=1)
df['Fantasy'] = df[['Fantasy', 'Sci-Fi']].sum(axis=1)
df['Other_genre'] = df[['Music', 'History', 'Sport', 'War', 'Western', 'Musical', 'Documentary', 'News']].sum(axis=1)
df.drop(['Music', 'History', 'Sport', 'War', 'Western', 'Musical', 'Documentary', 'News', 'Horror', 'Sci-Fi'],
axis=1, inplace=True)
genre_lst = list(df)[19:32]
for x in genre_lst:
#print(x)
    df.loc[df[x] > 1, x] = 1
#print(df['%s' % x].value_counts())
df = df.drop("genre",axis=1)
genre_dict = {}
for i in df.columns[14:]:
genre_dict.update({i:i.lower()})
df = df.rename(columns = genre_dict)
for i in df.columns[14:]:
num_feat.append(i)
print(num_feat,len(num_feat))
cate_feat.remove("genre")
print(cate_feat, len(cate_feat))
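How `str.get_dummies` expands a comma-separated genre string into indicator columns, shown on a toy series:

```python
import pandas as pd

genres = pd.Series(["Action, Comedy", "Comedy, Drama"])
# one 0/1 indicator column per distinct genre, columns sorted alphabetically
dummies = genres.str.get_dummies(sep=", ")
print(dummies.columns.tolist())  # ['Action', 'Comedy', 'Drama']
```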
# ### imdb_votes
df["imdb_votes"] = df["imdb_votes"].astype(str).str.replace("\D+","").astype(int)
# df["imdb_votes"].hist()
sns.distplot(df["imdb_votes"],kde=False)
df["imdb_votes"] = df["imdb_votes"].map(lambda x:math.log(x))
sns.distplot(df["imdb_votes"])
num_feat.append("imdb_votes")
cate_feat.remove("imdb_votes")
print(num_feat,len(num_feat))
print(cate_feat, len(cate_feat))
# ### language
# list length
df["language"] = df["language"].map(lambda x:len(str(x).split(",")))
df["language"].value_counts()
num_feat.append("language")
cate_feat.remove("language")
print(num_feat,len(num_feat))
print(cate_feat, len(cate_feat))
# ### production
# frequency encoding
df["production"] = df["production"].replace(np.nan, "Unknown")\
.map(lambda x: x.split(" ")[0] if len(x) > 1 else x)
# (df["production"].value_counts()/df.shape[0])[:100].plot.bar(figsize=(16,6))
# zip_freq = list(df_2014['addrzip'].value_counts()[:20].index)
# df_2014['addrzip'] = df_2014['addrzip'].map(lambda s:'others' if s not in zip_freq else s)
# list(df_2014['addrzip'].value_counts().index)
# zip_map = {'others':20,'750':0,'945':1,'112':2,'606':3,'300':4,'070':5,'331':6,'100':7,'770':8,
# '900':9,'117':10,'917':11,'104':12,'891':13,'330':14,'852':15,'921':16,'913':17,'926':18,'925':19}
# df_2014['addrzip'] = df_2014['addrzip'].map(lambda s: zip_map.get(s) if s in zip_map else s)
prod_freq = list(df["production"].value_counts()[:20].index)
df["production"] = df["production"].map(lambda s:"other_productions" if s not in prod_freq else s)
prod_counts = df["production"].value_counts()
prod_dict = prod_counts.to_dict()
df["production"] = df["production"].map(lambda s:prod_dict.get(s) if s in prod_dict else s)
df["production"].unique()
# +
# high cardinality: may use frequency encoding
# (df["production"].value_counts()/df.shape[0])[:].sum()
# -
num_feat.append("production")
cate_feat.remove("production")
print(num_feat,len(num_feat))
print(cate_feat, len(cate_feat))
# ### rated
df["rated"].value_counts()
# df["rated"].value_counts().plot.bar()
sns.countplot(x="rated", data=df)
sns.set_style('ticks')
fig, ax = plt.subplots()
# the size of A4 paper
fig.set_size_inches(11.7, 8.27)
sns.boxplot(x="rated", y="revenue", data=df)
df["rated"] = df["rated"].replace(np.nan, "UNRATED")\
.replace("NOT RATED", "UNRATED")
df = pd.concat([df, df['rated'].str.get_dummies(sep=', ')], axis=1)
df.columns[28:]
for i in df.columns[28:]:
num_feat.append(i)
cate_feat.remove("rated")
print(num_feat,len(num_feat))
print(cate_feat, len(cate_feat))
df = df.drop("rated",axis=1)
# ### released
# index of released date col
index = df.columns.get_loc("released")
#change date data to timestamp
release_dates = pd.to_datetime(df["released"])
# whether the release date falls on a weekend (Fri-Sun)
weekend_list = []
for each in release_dates:
day_ofweek = each.dayofweek
if day_ofweek >= 4 and day_ofweek <= 6:
tag = 1
else:
tag = 0
weekend_list.append(tag)
# whether the release falls in the "dump months" (Dec-Feb, Aug-Sep)
dumpmonth_list = []
for each in release_dates:
month = each.month
if month == 12 or month == 1 or month == 2 or month == 8 or month ==9:
tag = 1
else:
tag = 0
dumpmonth_list.append(tag)
df.insert(loc=index+1,column = "released_on_weekend",value=weekend_list)
df.insert(loc=index+2,column = "released_on_dump_month",value=dumpmonth_list)
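The two loops above can be vectorized with pandas datetime accessors; a small sketch on two hand-picked dates (a Friday and a mid-week January date):

```python
import pandas as pd

rel = pd.to_datetime(pd.Series(["2014-06-06", "2014-01-15"]))  # a Friday, a Wednesday
on_weekend = rel.dt.dayofweek.between(4, 6).astype(int)        # Mon=0 ... Sun=6
on_dump_month = rel.dt.month.isin([12, 1, 2, 8, 9]).astype(int)
print(on_weekend.tolist(), on_dump_month.tolist())  # [1, 0] [0, 1]
```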
num_feat.append("released_on_weekend")
num_feat.append("released_on_dump_month")
cate_feat.remove("released")
print(num_feat,len(num_feat))
print(cate_feat, len(cate_feat))
df = df.drop("released",axis=1)
# ### runtime
df["runtime"].dtype
df["runtime"] = df["runtime"].map(lambda x:int(x.strip("min")))
sns.distplot(df["runtime"])
num_feat.append("runtime")
cate_feat.remove("runtime")
print(num_feat,len(num_feat))
print(cate_feat, len(cate_feat))
# ### actors & directors
df = df.drop(["actors","director"],axis=1)
sns.distplot(df["actor_popularity"])
sns.distplot(df["actor_popularity"].map(lambda x:math.log(x)+0.01))
sns.distplot(df["director_popularity"].map(lambda x:math.log(x)))
sns.distplot(df["director_popularity"])
cate_feat.remove("actors")
cate_feat.remove("director")
print(num_feat,len(num_feat))
print(cate_feat, len(cate_feat))
# ## Standardization
x_train = df[df["year"] <= 2013].drop("revenue",axis=1)
x_test = df[df["year"] > 2013].drop("revenue",axis=1)
y_train = df[df["year"] <= 2013]["revenue"]
y_test = df[df["year"] > 2013]["revenue"]
# +
# num_feat.remove("revenue")
# stand_feat = []
# nonstand_feat = []
# for feat in num_feat:
# if X[feat].nunique() > 2:
# stand_feat.append(feat)
# else:
# nonstand_feat.append(feat)
# +
# from sklearn.preprocessing import StandardScaler
# scaler = StandardScaler()
# scaler_feat = scaler.fit_transform(X[stand_feat])
# X_feat = pd.DataFrame(scaler_feat,columns=X[stand_feat].columns)
# pd.concat([X_feat,X[nonstand_feat]],axis=1)
# -
df.shape
fig, ax = plt.subplots()
fig.set_size_inches(16, 10)
sns.heatmap(df.drop("revenue",axis=1).corr())
# +
# sns.pairplot(df.drop("revenue",axis=1))
# -
# ## Regression Model
# +
# from sklearn.model_selection import train_test_split
# x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.3)
# -
from sklearn.linear_model import LinearRegression
lrm = LinearRegression()
lrm.fit(x_train,y_train)
print(lrm.intercept_)
lrm.coef_
cdf = pd.DataFrame(lrm.coef_,x_train.columns,columns=["Coeff"])
predictions = lrm.predict(x_test)
# plt.scatter(y_test, predictions)
sns.distplot((y_test-predictions)) # should be normal distribution
cdf
from sklearn import metrics
print(metrics.mean_absolute_error(y_test, predictions))
print(metrics.mean_squared_error(y_test, predictions))
print(np.sqrt(metrics.mean_squared_error(y_test, predictions)))
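The three error metrics printed above can be verified by hand; a small sketch with made-up values:

```python
import numpy as np

y_true = np.array([3.0, 5.0, 2.0])
y_pred = np.array([2.5, 5.0, 3.0])
mae = np.mean(np.abs(y_true - y_pred))   # mean absolute error
mse = np.mean((y_true - y_pred) ** 2)    # mean squared error
rmse = np.sqrt(mse)                      # root mean squared error
print(round(mae, 3), round(mse, 3), round(rmse, 3))  # 0.5 0.417 0.645
```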
| paper/historycode/Code/.ipynb_checkpoints/Data_Analysis-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Conclusion
#
# - This is a Python-based notebook
# -
# ## Summary of the whole project
import pandas as pd
results = pd.read_csv('data/model_results.csv', index_col = 0 )
results.reset_index().rename(index= {0: 'fit time', 1: 'score time', 2: 'test accuracy', 3: 'train accuracy', 4: 'test ROC AUC', 5: 'train ROC AUC'})
# Amongst the models, `LGBMClassifier` is the best choice. Even though `CatBoostClassifier` has the best accuracy and ROC AUC score, its fit time is slow. Fit time matters because the algorithm will likely be refit on each user's latest song listening history whenever the user wants to update their playlist. For `LGBMClassifier`, the test accuracy is **0.950**, which is 0.004 lower than `CatBoostClassifier`, but the fit time is 20 times shorter.
#
# Therefore, `LGBMClassifier` will be used for the Spotify user behavior prediction.
| spotify_user_behaviour_predictor/7_Conclusion.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Example: (Kaggle) House Price Prediction
# ***
# - Using the house price prediction data below, observe the effect of reducing skewness
# +
# All preparation before the feature-engineering steps
import pandas as pd
import numpy as np
from copy import deepcopy
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression
data_path = 'data/data2/'
df_train = pd.read_csv(data_path + 'house_train.csv.gz')
df_test = pd.read_csv(data_path + 'house_test.csv.gz')
train_Y = np.log1p(df_train['SalePrice'])
ids = df_test['Id']
df_train = df_train.drop(['Id', 'SalePrice'] , axis=1)
df_test = df_test.drop(['Id'] , axis=1)
df = pd.concat([df_train,df_test])
df.head()
# +
# Keep only the int64 and float64 numeric columns, stored in num_features
num_features = []
for dtype, feature in zip(df.dtypes, df.columns):
if dtype == 'float64' or dtype == 'int64':
num_features.append(feature)
print(f'{len(num_features)} Numeric Features : {num_features}\n')
# Drop the text columns, keeping only the numeric ones
df = df[num_features]
df = df.fillna(-1)
MMEncoder = MinMaxScaler()
train_num = train_Y.shape[0]
df.head()
# -
# Show the distribution of LotArea
import seaborn as sns
import matplotlib.pyplot as plt
sns.distplot(df['LotArea'][:train_num])
plt.show()
# Compute the baseline score
df_mm = MMEncoder.fit_transform(df)
train_X = df_mm[:train_num]
estimator = LinearRegression()
cross_val_score(estimator, train_X, train_Y, cv=5).mean()
# +
# Apply log1p to LotArea, plot its distribution, and compute the score
df_fixed = deepcopy(df)
df_fixed['LotArea'] = np.log1p(df_fixed['LotArea'])
sns.distplot(df_fixed['LotArea'][:train_num])
plt.show()
df_fixed = MMEncoder.fit_transform(df_fixed)
train_X = df_fixed[:train_num]
estimator = LinearRegression()
cross_val_score(estimator, train_X, train_Y, cv=5).mean()
# +
# Apply Box-Cox (lmbda=0) to LotArea, plot its distribution, and compute the score
from scipy import stats
df_fixed = deepcopy(df)
df_fixed['LotArea'] = stats.boxcox(df_fixed['LotArea'] + 1, lmbda=0)  # with lmbda given, boxcox returns the transformed array directly
sns.distplot(df_fixed['LotArea'][:train_num])
plt.show()
df_fixed = MMEncoder.fit_transform(df_fixed)
train_X = df_fixed[:train_num]
estimator = LinearRegression()
cross_val_score(estimator, train_X, train_Y, cv=5).mean()
# +
# Apply sqrt (Box-Cox: lmbda=0.5) to LotArea, plot its distribution, and compute the score
from scipy import stats
df_fixed = deepcopy(df)
df_fixed['LotArea'] = stats.boxcox(df_fixed['LotArea'], lmbda=0.5)  # with lmbda given, boxcox returns the transformed array directly
sns.distplot(df_fixed['LotArea'][:train_num])
plt.show()
df_fixed = MMEncoder.fit_transform(df_fixed)
train_X = df_fixed[:train_num]
estimator = LinearRegression()
cross_val_score(estimator, train_X, train_Y, cv=5).mean()
# -
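A quick check that Box-Cox with `lmbda=0` applied to `x + 1` is exactly the `log1p` transform used earlier in this notebook:

```python
import numpy as np
from scipy import stats

x = np.array([1.0, 10.0, 100.0])
# Box-Cox with lmbda=0 is the natural log, so boxcox(x+1, lmbda=0) == log1p(x)
bc = stats.boxcox(x + 1, lmbda=0)
print(np.allclose(bc, np.log1p(x)))  # True
```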
# # Exercise 1
# * Try applying log de-skewing (log1p) to the Titanic Fare column. Does the result improve?
#
# # Exercise 2
# * Running the final boxcox block as-is raises an error because the input contains negative values. How can the data be corrected so that boxcox can be used? (Hint: try adjusting the data)
| Day_021_Reduce_Skewness.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
notas = c(59, 61, 74, 84, 86, 96, 92, 53, 66, 58, 49, 71, 72, 73, 66, 91, 68, 79, 79, 64, 84, 86, 79, 88, 59, 98, 82, 69, 75)
# range of values
range(notas)
breaks <- seq(40, 100, by=10)
breaks
# grades per interval
notas.cut = cut(notas, breaks, right = FALSE)
# Frequency
notas.freq <- table(notas.cut)
notas.freq
# Frequency distribution
cbind(notas.freq)
# Histogram of the class grades
hist(notas, right=FALSE, main="Frequency Distribution",
     xlab="Grades", ylab="Frequency")
# +
# Histogram of the class grades
colors <- c("red","yellow","green","pink","orange",
            "blue", "violet", "cyan")
# adding the colors to the histogram
hist(notas, right=FALSE, main="Frequency Distribution",
     xlab="Grades", ylab="Frequency", col=colors)
# -
# Relative frequency
notas.relfreq <- notas.freq/length(notas)
notas.relfreq
# Relative frequency (rounded display)
options(digits=1)
notas.relfreq <- notas.freq/length(notas)
notas.relfreq
# Duration of VoIP calls
vc <- c(180,165,178,189,193)
vc
# Computing the arithmetic mean
mean(vc)
# Computing the grade point average (Evoney - most recent courses taken)
notas = c(8.31, 9.06, 8.93, 6.02)
creditos = c(6, 4, 4, 5)
cr = weighted.mean(notas, creditos)
cr
# Computed values (checking for discrepancies)
x = c(18, 13, 11, 8, 10, 16, 12)
# Computing the arithmetic mean
mean(x)
# Computed values (replacing 16 with 58)
x = c(18, 13, 11, 8, 10, 58, 12)
mean(x)
# Computed values
x = c(18, 13, 11, 8, 10, 58, 12)
install.packages("psych") # Install psych package
library("psych") # Load psych package
# Geometric mean
geometric.mean(x)
# Computed values for outbound and return speeds
v = c(60,30)
# Computing the harmonic mean
harmonic.mean(v)
# Computed values (understanding the median for an odd sample size)
amigos = c(108, 103, 252, 121, 93, 57, 40, 53, 11, 116, 98)
# Computing the median
median(amigos)
# Computed values (rearranged for an even sample size)
amigos = c(108,103,121,93,57,40,53,11,116,98)
# Computing the median
median(amigos)
#---- Calculating the mode ----
x=c(1,4,2,3,4,6,3,7,8,5,4,3)
# R has no built-in statistical mode; define one first
moda = function(v) { uv = unique(v); uv[which.max(tabulate(match(v, uv)))] }
moda(x)
| Aula_AVD_Evoney_05_04.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
from scipy.stats import beta, t, norm
from scipy.special import btdtri
import matplotlib.pyplot as plt
# +
p = 0.5
n = 10
success = np.random.binomial(p=p, n=n)
failure = n - success
print("success = %i, failure = %i"%(success, failure))
# +
prior_a = 1
prior_b = 1
a = prior_a + success
b = prior_b + failure
rv = beta(a, b)
b_up = btdtri(a, b, 0.975)
b_lo = btdtri(a, b, 0.025)
print("95%% credible interval: [%.3f, %.3f]"%(b_lo, b_up))
# +
p_hat = success / n
se = np.sqrt(p_hat * (1 - p_hat) / n)
f_up = p_hat + 1.96 * se
f_lo = p_hat - 1.96 * se
print("95%% confidence interval: [%.3f, %.3f]"%(f_lo, f_up))
# +
fig, ax = plt.subplots()
x = np.linspace(0, 1, 1000)
ax.plot(x, rv.pdf(x), color='blue')
x = np.linspace(p_hat - 5 * se, p_hat + 5 * se, 1000)
ax.plot(x, norm.pdf(x, loc=p_hat, scale=se), color='r')
ax.legend(["Beta(\u03B1=%i, \u03B2=%i)"%(a, b), "N(\u03BC=%.2f, \u03C3=%.2f)"%(p_hat, se)], frameon=False)
ax.set_ylabel("density")
# title = "success %i, failure %i"%(success, failure)
# ax.set_xlabel(title)
# minimalism style
# ax.tick_params(top=False, bottom=False, left=False, right=False, labelleft=False, labelbottom=True)
for spine in plt.gca().spines.values():
spine.set_visible(False)
plt.savefig("outputs/ci_compare_2.png")
# +
fig = plt.figure(figsize=(14, 4))
grid = plt.GridSpec(1, 2, hspace=0.2, wspace=0.2)
ax1 = fig.add_subplot(grid[:, :1])
ax2 = fig.add_subplot(grid[:, 1:])
# bayesian credible interval
x = np.linspace(0, 1, 1000)
ax1.plot(x, rv.pdf(x), color='blue')
# plot prior if necessary
rv_prior = beta(prior_a, prior_b)
ax1.plot(x, rv_prior.pdf(x), alpha=0.2)
# bayesian credible interval
right_line = ax1.axvline(b_up, lw=2, color='blue')
left_line = ax1.axvline(b_lo, lw=2, color='blue')
fill = ax1.axvspan(b_lo, b_up, alpha=0.2, color='blue')
ax1.set_xlabel("95%% credible interval: [%.3f, %.3f]"%(b_lo, b_up))
ax1.legend(["Beta(\u03B1=%i, \u03B2=%i)"%(a, b), "flat prior"], frameon=False)
ax1.spines["top"].set_visible(False)
ax1.spines["right"].set_visible(False)
ax1.spines["left"].set_visible(False)
ax1.spines["bottom"].set_visible(False)
# frequentist confidence interval
ax2.plot(x, norm.pdf(x, loc=p_hat, scale=se), color='r')
right_line = ax2.axvline(f_up, lw=2, color='r')
left_line = ax2.axvline(f_lo, lw=2, color='r')
fill = ax2.axvspan(f_lo, f_up, alpha=0.2, color='r')
ax2.set_xlabel("95%% confidence interval: [%.3f, %.3f]"%(f_lo, f_up))
ax2.legend(["N (\u03BC=%.2f, \u03C3=%.2f)"%(p_hat, se)], frameon=False)
ax2.spines["top"].set_visible(False)
ax2.spines["right"].set_visible(False)
ax2.spines["left"].set_visible(False)
ax2.spines["bottom"].set_visible(False)
plt.savefig("outputs/ci_compare.png")
| concepts/credible_interval.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="37XDME5ssAep"
# # CASE STUDY: FASHION CLASS CLASSIFICATION USING MNIST DATASET
#
# + [markdown] colab_type="text" id="m6woz_icsAet"
# # STEP #1: PROBLEM STATEMENT AND BUSINESS CASE
# + [markdown] colab_type="text" id="qpSYyjDGsAex"
# The Fashion-MNIST dataset consists of 70,000 images divided into 60,000 training and 10,000 testing samples. Each sample is a 28x28 grayscale image, associated with a label from one of 10 classes.
#
# The 10 classes are as follows:
# 0. T-shirt/top
# 1. Trouser
# 2. Pullover
# 3. Dress
# 4. Coat
# 5. Sandal
# 6. Shirt
# 7. Sneaker
# 8. Bag
# 9. Ankle boot
#
# Each image is 28 pixels in height and 28 pixels in width, for a total of 784 pixels. Each pixel has a single pixel-value associated with it, indicating the lightness or darkness of that pixel, with higher numbers meaning darker. This pixel-value is an integer between 0 and 255.
# + [markdown] colab_type="text" id="tq_i-gcjsAey"
# # STEP #2: IMPORTING DATA
# + colab={} colab_type="code" id="4MgyCG0HsAe1"
# import libraries
import pandas as pd # Import Pandas for data manipulation using dataframes
import numpy as np # Import Numpy for data statistical analysis
import matplotlib.pyplot as plt # Import matplotlib for data visualisation
import seaborn as sns #Visualization
# + colab={"base_uri": "https://localhost:8080/", "height": 122} colab_type="code" id="JTQLgn_MuqbV" outputId="9cda24d0-a965-41f4-95f3-94c9e263c68e"
#Running or Importing .py Files with Google Colab
from google.colab import drive
drive.mount('/content/drive/')
# + colab={"base_uri": "https://localhost:8080/", "height": 1969} colab_type="code" id="HFLFplOxsAe_" outputId="eb0aa4f1-ca7f-4328-b91c-700ec7436055"
# Dataframes creation for both training and testing datasets
fashion_train_df = pd.read_csv("/content/drive/My Drive/app/fashion-mnist_train.csv",sep=',')
fashion_test_df = pd.read_csv("/content/drive/My Drive/app/fashion-mnist_test.csv", sep = ',')
fashion_train_df
# + [markdown] colab_type="text" id="o608ETtdsAfG"
# # STEP #3: VISUALIZATION OF THE DATASET
# + colab={"base_uri": "https://localhost:8080/", "height": 233} colab_type="code" id="hKNoRcx5sAfI" outputId="cc8a6d85-9ce7-4e5a-becb-fc10f4b2832b"
# Let's view the head of the training dataset
# 784 columns hold the 28x28 pixel values, plus 1 column for the label
# Checking the tail confirms all 60,000 training samples are present
fashion_train_df.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 233} colab_type="code" id="YjO7eyFzsAfO" outputId="4199ec6e-4889-4187-a325-3fa902f4e57e"
# Let's view the last elements in the training dataset
fashion_train_df.tail()
# + colab={"base_uri": "https://localhost:8080/", "height": 233} colab_type="code" id="FhqEOMqqsAfV" outputId="de9daa1e-318f-4ed3-e1c5-7e5aa248c2e4"
# Let's view the head of the testing dataset
fashion_test_df.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 233} colab_type="code" id="AT4IOei_sAff" outputId="5ec0d5c6-a417-4c54-89d9-86eb48de8709"
# Let's view the last elements in the testing dataset
fashion_test_df.tail()
# + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="SssQxdOJsAfo" outputId="7cd1a612-44d2-47b8-af5d-0193ad908c15"
print(f'Training Sample:{fashion_train_df.shape}')
print(f'Testing Sample:{fashion_test_df.shape}')
# + colab={"base_uri": "https://localhost:8080/", "height": 289} colab_type="code" id="WgOx1mZOsAfv" outputId="4c2577b7-66cd-4079-9c4d-275ebefdd349"
# Create training and testing arrays to visualize data
training = np.array(fashion_train_df, dtype='float32')
testing = np.array(fashion_test_df, dtype='float32')
print(testing)
print('\n')
print(training)
# + colab={"base_uri": "https://localhost:8080/", "height": 364} colab_type="code" id="5J_OQbVAsAgJ" outputId="75517e7a-fae8-46c2-af5a-4b10f845b8e6"
# Let's view some images!
import random
i = random.randint(1,60000) # select any random index from 1 to 60,000
plt.imshow(training[i,1:].reshape((28,28))) # reshape and plot the image
#Any row and all columns
plt.imshow(training[i,1:].reshape((28,28)),cmap = 'gray') # reshape and plot the image
label = training[i,0]
print(f'Image Label:{label}')
# Remember the 10 classes decoding is as follows:
# 0 => T-shirt/top
# 1 => Trouser
# 2 => Pullover
# 3 => Dress
# 4 => Coat
# 5 => Sandal
# 6 => Shirt
# 7 => Sneaker
# 8 => Bag
# 9 => Ankle boot
# + colab={"base_uri": "https://localhost:8080/", "height": 1146} colab_type="code" id="26GDSE6QsAgc" outputId="300c26b9-5736-4059-9e8a-294f2a7dc8d8"
# Let's view more images in a grid format
# Define the dimensions of the plot grid
W_grid = 10
L_grid = 10
# fig, axes = plt.subplots(L_grid, W_grid)
# subplot return the figure object and axes object
# we can use the axes object to plot specific figures at various locations
fig, axes = plt.subplots(L_grid, W_grid, figsize = (20,20))
axes = axes.ravel() # flatten the 10 x 10 grid of axes into a flat array of 100
n_training = len(training) # get the length of the training dataset
# Select a random number from 0 to n_training
for i in np.arange(0, W_grid * L_grid): # iterate over every grid position
# Select a random number
index = np.random.randint(0, n_training)
# read and display an image with the selected index
axes[i].imshow( training[index,1:].reshape((28,28)) )
axes[i].set_title(training[index,0], fontsize = 8)
axes[i].axis('off')
plt.subplots_adjust(hspace=0.4)
# Remember the 10 classes decoding is as follows:
# 0 => T-shirt/top
# 1 => Trouser
# 2 => Pullover
# 3 => Dress
# 4 => Coat
# 5 => Sandal
# 6 => Shirt
# 7 => Sneaker
# 8 => Bag
# 9 => Ankle boot
# + [markdown] colab_type="text" id="lyaU_XQVsAgh"
# # STEP #4: TRAINING THE MODEL
# + colab={"base_uri": "https://localhost:8080/", "height": 289} colab_type="code" id="mFn5sIdMsAgk" outputId="a56a9822-0fbc-48b6-dd8f-ee1b84d8b727"
# Prepare the training and testing dataset
X_train = training[:,1:]/255 # Training Normalization
y_train = training[:,0]
X_test = testing[:,1:]/255 # Testing Normalization
y_test = testing[:,0]
print(X_train)
print('\n')
print(X_test)
# + colab={} colab_type="code" id="QKFgfUcVsAgq"
# Import train_test_split from scikit library
from sklearn.model_selection import train_test_split
X_train, X_validate, y_train, y_validate = train_test_split(X_train, y_train, test_size = 0.2,random_state =0)
# + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="ykrXSJ7hsAg2" outputId="471d674e-24c8-4a2e-f070-74adf34a36fb"
print(f'Training Sample:{X_train.shape}')
print(f'Training Labels:{y_train.shape}')
# + colab={} colab_type="code" id="oprVfOVqsAhJ"
# * unpack the tuple
# To make Image Network
X_train = X_train.reshape(X_train.shape[0], *(28, 28, 1))
X_test = X_test.reshape(X_test.shape[0], *(28, 28, 1))
X_validate = X_validate.reshape(X_validate.shape[0], *(28, 28, 1))
# + colab={"base_uri": "https://localhost:8080/", "height": 68} colab_type="code" id="Lf0K-WoYsAhR" outputId="27d0e183-300d-417e-ac05-9e1241f1c6bc"
print(f'Training Sample:{X_train.shape}')
print(f'Test Sample Size:{X_test.shape}')
print(f'Validate Size:{X_validate.shape}')
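The reshape above turns each flat 784-value row back into a 28x28 image with a single channel axis, which is the input layout a Conv2D layer expects; a tiny sketch:

```python
import numpy as np

flat = np.random.rand(5, 784)                   # 5 flattened 28x28 images
imgs = flat.reshape(flat.shape[0], 28, 28, 1)   # add the channel axis for Conv2D
print(imgs.shape)  # (5, 28, 28, 1)
```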
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="rR-k5rhnsAhs" outputId="b3ab52fd-b19a-4a43-9efb-08a35ab026f4"
# Import Keras Library
import keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dense, Flatten, Dropout
from keras.optimizers import Adam #Adam Optimizer
from keras.callbacks import TensorBoard
# + colab={"base_uri": "https://localhost:8080/", "height": 119} colab_type="code" id="uoXNrMfdsAhx" outputId="8849edc0-c4dc-4dbf-c62c-1738cd04f8ad"
#Model
cnn_model = Sequential()
# Try 64 fliters/Kernal Detectors With Dropout
cnn_model.add(Conv2D(64, (3, 3), input_shape = (28,28,1), activation='relu'))
cnn_model.add(MaxPooling2D(pool_size = (2, 2))) #Pooling Layer
cnn_model.add(Dropout(0.25))
cnn_model.add(Flatten()) #Flattening
cnn_model.add(Dense(units = 32, activation = 'relu')) #Dense Layer
cnn_model.add(Dense(units = 10, activation = 'softmax')) #Output layer: softmax gives 10-class probabilities
# + colab={} colab_type="code" id="eZpaDIH0sAh1"
cnn_model.compile(loss ='sparse_categorical_crossentropy', optimizer=Adam(learning_rate=0.001), metrics =['accuracy']) #Adam Optimizer
# + colab={"base_uri": "https://localhost:8080/", "height": 1768} colab_type="code" id="c841M21CsAh-" outputId="0fbf3989-c712-43da-bf5e-282dc751a779"
epochs = 50 #Taking Dataset and updating the weights
history = cnn_model.fit(X_train, y_train,
                        batch_size = 512,
                        epochs = epochs,
                        verbose = 1,
                        validation_data = (X_validate, y_validate))
# + colab={} colab_type="code" id="CMQOdk<PASSWORD>Gn"
#Model
cnn_model = Sequential()
# Try 32 fliters/Kernal Detector With Dropout
cnn_model.add(Conv2D(32, (3, 3), input_shape = (28,28,1), activation='relu'))
cnn_model.add(MaxPooling2D(pool_size = (2, 2))) #Pooling Layer
cnn_model.add(Dropout(0.25))
cnn_model.add(Flatten()) #Flattening
cnn_model.add(Dense(units = 32, activation = 'relu')) #Dense Layer
cnn_model.add(Dense(units = 10, activation = 'softmax')) #Output layer: softmax gives 10-class probabilities
cnn_model.compile(loss ='sparse_categorical_crossentropy', optimizer=Adam(learning_rate=0.001), metrics =['accuracy']) #Adam Optimizer
epochs = 50 #Number of passes over the training data
history = cnn_model.fit(X_train, y_train,
                        batch_size = 512,
                        epochs = epochs,
                        verbose = 1,
                        validation_data = (X_validate, y_validate))
# + [markdown] colab_type="text" id="x2DbK5KcsAiH"
# # STEP #5: EVALUATING THE MODEL
# + colab={"base_uri": "https://localhost:8080/", "height": 68} colab_type="code" id="vtJHqJlAsAiK" outputId="02ee9762-16b9-424c-db05-bcc42acd1704"
evaluation = cnn_model.evaluate(X_test, y_test)
print('Test Accuracy : {:.3f}'.format(evaluation[1]))
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="gk7KRYQhsAiS" outputId="1dfe5b1a-ac8a-4666-f7eb-6267f3242e1c"
# get the predictions for the test data
predicted_classes = cnn_model.predict_classes(X_test)
print(predicted_classes)
# + colab={"base_uri": "https://localhost:8080/", "height": 1149} colab_type="code" id="MT1u3kYcsAic" outputId="e3025f86-d14f-4964-f48a-8c97c62b3bde"
#Pick 100 images
L = 10
W = 10
fig, axes = plt.subplots(L, W, figsize = (20,20))
axes = axes.ravel() # Flatten our axis array
for i in np.arange(0, L * W):
axes[i].imshow(X_test[i].reshape(28,28))
axes[i].set_title("Prediction Class = {:0.1f}\n True Class = {:0.1f}".format(predicted_classes[i], y_test[i])) #Prediction Label and True Label
axes[i].axis('off')
plt.subplots_adjust(wspace=0.5)
# + colab={"base_uri": "https://localhost:8080/", "height": 609} colab_type="code" id="5PDgnAG_sAik" outputId="051b1334-9528-4217-986a-5de820089588"
#Confusion Matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, predicted_classes)
plt.figure(figsize = (14,10))
sns.heatmap(cm, annot=True) # Sum the diagonal element to get the total true correct values
# + colab={"base_uri": "https://localhost:8080/", "height": 272} colab_type="code" id="5deZ9LkysAio" outputId="2f659a45-3801-481e-a5b2-40bc9f422ca4"
#Classification Report
from sklearn.metrics import classification_report
num_classes = 10
target_names = ["Class {}".format(i) for i in range(num_classes)]
print(classification_report(y_test, predicted_classes, target_names = target_names))
# + [markdown] colab_type="text" id="fblKz4P3iLvu"
# #Conclusion
#
# We achieve about 91% accuracy overall on the fashion classes, with Class 6 (Shirt) performing noticeably worse. Accuracy could be improved by adding more feature detectors/filters or by adding dropout. Dropout is a regularization technique that reduces overfitting in neural networks by randomly dropping out units during training.
#
| Data-Science-Portfolio-master/Deep Learning/Fashion_Class_Classification_using_MNIST_dataset.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# # Exercise 11. Two-sample tests/Interval estimates
# ## <NAME>, <NAME>
# # Overview of confidence intervals and hypothesis tests and their constructions
#
# ## Two columns of data - pairs/independent
#
# - Paired data indicates data that are taken as two measurements of the same entities -> data columns are dependent.
# - For paired data, we calculate the difference between the columns(or another function according to the input) and use one-sample tests for this difference.
#
# - If there is no dependency between values in the two columns the data are independent.
# - Two-sample test is needed
# ### Examples of paired data:
#
# - measuring bulbs at two different temperatures (if each piece is measured twice - at temperature 1 and temperature 2)
#
# - be careful here, it can happen that the tests are eg. destructive and it is not possible to measure twice the same entity(product). Then we would consider two independent selections, each for one type of measurement -> independent data columns -> two-sample tests
#
# - measurement of the patient's blood values before and after drug administration
#
# - again pay attention to, for example, drug testing in two groups (placebo/real drug) -> two independent groups -> two-sample tests
# ## In general for two-sample tests/CI
#
# - the test is always tied to the appropriate CI -> same conditions of use
#
# - if the test has conditions of use(eg: normality of data, symmetry of data) then this condition must be met **for both data columns**, if at least one does not meet, we consider the assumption to be broken
#
# - one of the very important assumptions is data independence
#
# - eg: measurement of products of manufacturer A and products of manufacturer B - here it is reasonable to assume that the products of manufacturer A are separate entities from the products of manufacturer B
# ## Two-sample tests/IO - difference of position measures
#
#
# we create test data
data1 = rnorm(n = 30, mean = 105, sd = 10)
data2 = rnorm(n = 30, mean = 100, sd = 10)
boxplot(data1,data2)
# ### Two-sample Student's t-test
#
# - Tests/estimates difference of means: $H_0: \mu_{1} - \mu_{2} = a$
# - requirements:
# - Data normality
# - Homoskedasticity(scatter matching)
# - independence of selections
# - the function must have the parameter var.equal=TRUE
# +
# H0: mu1 - mu2=2
# HA: mu1 - mu2!=2
t.test(x = data1, y = data2, mu = 2, alternative = "two.sided",
var.equal = TRUE, conf.level = 0.95)
# +
# H0: mu1 - mu2=2
# HA: mu1 - mu2>2
t.test(x = data1, y = data2, mu = 2, alternative = "greater",
var.equal = TRUE, conf.level = 0.95)
# +
# H0: mu1 - mu2=2
# HA: mu1 - mu2<2
t.test(x = data1, y = data2, mu = 2, alternative = "less",
var.equal = TRUE, conf.level = 0.95)
# -
# ### Aspin-Welch test
#
# - Tests/estimates the difference of means: $H_0: \mu_{1} - \mu_{2} = a$
# - requirements:
# - Data normality
# - independence of selections
# - the function must have the parameter var.equal=FALSE
# +
# H0: mu1 - mu2=2
# HA: mu1 - mu2!=2
t.test(x = data1, y = data2, mu = 2, alternative = "two.sided",
var.equal = FALSE, conf.level = 0.95)
# +
# H0: mu1 - mu2=2
# HA: mu1 - mu2>2
t.test(x = data1, y = data2, mu = 0, alternative = "greater",
var.equal = FALSE, conf.level = 0.95)
# +
# H0: mu1 - mu2=2
# HA: mu1 - mu2<2
t.test(x = data1, y = data2, mu = 0, alternative = "less",
var.equal = FALSE, conf.level = 0.95)
# -
# ### Mann-Whitney test
#
# - Tests/estimates difference of medians: $H_0: X_{0.5,1} - X_{0.5,2} = a$
# - requirements:
# - independence of selections
# - (same shape of the distribution)
# - requires conf.int=TRUE, to calculate CI
# +
# H0: X0.5,1 - X0.5,2=2
# HA: X0.5,1 - X0.5,2!=2
wilcox.test(x = data1, y = data2, mu = 2, alternative = "two.sided",
conf.level=0.95, conf.int = TRUE)
# +
# H0: X0.5,1 - X0.5,2=2
# HA: X0.5,1 - X0.5,2>2
wilcox.test(x = data1, y = data2, mu = 2, alternative = "greater",
conf.level=0.95, conf.int = TRUE)
# +
# H0: X0.5,1 - X0.5,2=2
# HA: X0.5,1 - X0.5,2<2
wilcox.test(x = data1, y = data2, mu = 2, alternative = "less",
conf.level=0.95, conf.int = TRUE)
# -
# ## Two-sample tests/CI - ratio of variances
#
# ### F-test
#
# - Tests/estimates the ratio of variances: $H_0: \sigma^2_{1} / \sigma^2_{2} = a$
# - requirements:
# - data normality
# - independence of samples
# +
# H0: sigma1^2 / sigma2^2 = 1
# HA: sigma1^2 / sigma2^2 != 1
var.test(x = data1, y = data2, ratio = 1, alternative = "two.sided",
conf.level = 0.95)
# +
# H0: sigma1^2 / sigma2^2 = 1
# HA: sigma1^2 / sigma2^2 > 1
var.test(x = data1, y = data2, ratio = 1, alternative = "greater",
conf.level = 0.95)
# +
# H0: sigma1^2 / sigma2^2 = 1
# HA: sigma1^2 / sigma2^2 < 1
var.test(x = data1, y = data2, ratio = 1, alternative = "less",
conf.level = 0.95)
# -
# ### Levene's test
#
# - Tests equality of variances: $H_0: \sigma^2_{1} = \sigma^2_{2}$
# - requirements:
# - independence of samples
# - requires data in the standard (long) data format
# - leveneTest function from the car package
# +
# we produce data in a standard data format
data1.df = as.data.frame(data1)
data1.df$typ = "d1"
colnames(data1.df) = c("data", "typ")
data2.df = as.data.frame(data2)
data2.df$typ = "d2"
colnames(data2.df) = c("data", "typ")
data = rbind(data1.df, data2.df)
data$typ = as.factor(data$typ)
head(data)
# +
# install.packages("car")
# H0: sigma1^2 = sigma2^2
# HA: sigma1^2 != sigma2^2
car::leveneTest(data$data ~ data$typ)
# -
# ## Two-sample tests/CI - difference of proportions
#
# ### Test of parameter of two binomial distributions
#
# - Tests equality of the two proportions: $H_0: \pi_{1} - \pi_{2} = 0$
# - requirements:
# - sufficient sample sizes: $n_i>\frac{9}{p_i(1-p_i)}$
# - independence of samples
# +
# we will produce suitable data
pi1 = 0.4
pi2 = 0.3
dp1 = runif(n = 100, min = 0, max = 1) < pi1
dp2 = runif(n = 130, min = 0, max = 1) < pi2
x1 = sum(dp1)
n1 = length(dp1)
x2 = sum(dp2)
n2 = length(dp2)
x1
n1
x2
n2
# +
# H0: pi1 - pi2=0
# HA: pi1 - pi2!=0
prop.test(x = c(x1, x2), n = c(n1, n2), alternative="two.sided",
conf.level=0.95)
# +
# H0: pi1 - pi2=0
# HA: pi1 - pi2>0
prop.test(x = c(x1, x2), n = c(n1, n2), alternative="greater",
conf.level=0.95)
# +
# H0: pi1 - pi2=0
# HA: pi1 - pi2<0
prop.test(x = c(x1, x2), n = c(n1, n2), alternative="less",
conf.level=0.95)
# -
# # Examples
library(dplyr)
library(rstatix)
#
# ## Example 1.
#
# Data in the cholesterol2.xls file give the blood cholesterol levels of men in two age groups (20-30 years and 40-50 years). At the significance level 0.05, verify the hypothesis that the cholesterol level in the blood of older men does not differ from that of younger men.
# Load data
chol = readxl::read_excel("data/testy_dvouvyberove.xlsx",
sheet = "cholesterol2",
skip = 1)
colnames(chol)=c("young","old")
head(chol)
# Convert to standard data format
chol.s = stack(chol)
chol.s = na.omit(chol.s)
colnames(chol.s) = c ("values","group")
head(chol.s)
# Exploratory analysis
boxplot(chol.s$values ~ chol.s$group)
# +
# Elimination of outliers:
chol.s$id = seq(1,length(chol.s$values))
outliars = chol.s %>% group_by(group) %>% identify_outliers(values)
outliars
# -
chol.s$values_cleared = ifelse(chol.s$id %in% outliars$id, NA, chol.s$values)
# +
boxplot(chol.s$values_cleared~chol.s$group)
# note: the data now contain NA values,
# e.g. when determining lengths
# +
chol.s %>% group_by(group) %>%
summarise(count = sum(!is.na(values_cleared)),
mean = mean(values_cleared, na.rm = TRUE),
std = sd(values_cleared, na.rm = TRUE))
# rounding -> 3 significant digits -> according to sd, to thousandths
# -
# **Difference of Mean/median test**
# +
# Verification of normality
chol.s %>% group_by(group) %>%
summarise(norm.pval = shapiro.test(values_cleared)$p.value)
# normality is not rejected at significance level 0.05
# +
# Equality of variances: exact F-test
# H0: sigma.old^2 = sigma.young^2
# Ha: sigma.old^2 != sigma.young^2
# Select the required data
young = chol.s$values_cleared[chol.s$group == "young"]
old = chol.s$values_cleared[chol.s$group == "old"]
var.test(x = young, y = old, ratio = 1, conf.level=0.95)
# At significance level 0.05 we reject the assumption of equal variances:
# the observed difference between the variances is
# statistically significant at the 0.05 level.
# +
# Verification of equality of means (Aspin-Welch test)
# H0: mu.old - mu.young=0
# Ha: mu.old - mu.young!=0
t.test(x = old, y = young, mu = 0,
alternative = "two.sided", var.equal=FALSE, conf.level=0.95)
# At significance level 0.05 we reject H0 -> there is a statistically significant difference.
# +
# H0: mu.old=mu.young
# Ha: mu.old>mu.young
t.test(x = old, y = young, mu = 0, alternative = "greater",
var.equal = FALSE, conf.level = 0.95)
# -
# ## Example 2.
#
# The data in the depression.xls file represent the length of remission in days from a simple random sample of two different groups of patients (patients with endogenous depression and patients with neurotic depression). Verify whether the observed difference in mean remission length between these two groups of patients is statistically significant.
#
#
# +
# Read data from xlsx file(using readxl package)
deprese = readxl::read_excel("data/testy_dvouvyberove.xlsx",
sheet = "deprese")
colnames(deprese)=c("endo","neuro")
head(deprese)
# +
# Conversion to standard data format
deprese.s = stack(deprese)
deprese.s = na.omit(deprese.s)
colnames(deprese.s) = c ("values","group")
head(deprese.s)
# +
# Exploratory analysis
boxplot(deprese.s$values~deprese.s$group)
# Data does not contain outliers
# +
library(dplyr)
deprese.s %>% group_by(group) %>%
summarise(count = length(values),
mean = mean(values),
std = sd(values))
# rounding ->3 valid digits ->according to sd
# -
# **Difference of mean/median test**
#
#
# +
# Normality verification
# We verify the assumption of normality with the Shapiro-Wilk test.
deprese.s %>% group_by(group) %>%
summarise(norm.pval = shapiro.test(values)$p.value)
# at significance 0.05, we reject the assumption of normality
# +
# As a rough guide, we check the similarity of the distributions
# select the data for easier processing
neuro = deprese.s$values[deprese.s$group == "neuro"]
endo = deprese.s$values[deprese.s$group == "endo"]
par(mfrow = c(1,2))
hist(neuro)
hist(endo)
# +
# Difference of median (Mann-Whitney test)
# According to the histograms, we assume that the data have the same type of distribution.
# H0: med.neuro = med.endo (med.neuro - med.endo = 0)
# Ha: med.neuro != med.endo (med.neuro - med.endo != 0)
wilcox.test(x = neuro, y = endo, mu = 0, alternative = "two.sided",
conf.level=0.95, conf.int = TRUE)
# at significance 0.05 we reject H0->there is a stat. significant difference
# +
# H0: med.neuro=med.endo(med.neuro - med.endo=0)
# Ha: med.neuro>med.endo(med.neuro - med.endo>0)
wilcox.test(x = neuro, y = endo, mu = 0, alternative = "greater",
conf.level=0.95, conf.int = TRUE)
# -
# ## Example 3.
#
# We monitor urine osmolality at the patient station at 08:00 and 11:00 for 16 men. Based on the results in the osmolality.xls file, verify that the osmolality has increased statistically significantly.
#
#
# Load data
osmolalita = readxl::read_excel("data/testy_dvouvyberove.xlsx",
sheet = "osmolalita", skip = 1)
osmolalita = osmolalita[,c(2,3)]
colnames(osmolalita)=c("o8","o11")
head(osmolalita)
# +
# Calculation of osmolality increase
osmolalita$increase = osmolalita$o11 - osmolalita$o8
# Exploratory analysis
par(mfrow = c(1,1))
boxplot(osmolalita$increase)
# The data contain outliers
# +
# Elimination of outliers:
osmolalita$id = seq(1,length(osmolalita$increase))
outliars = osmolalita %>% identify_outliers(increase)
outliars
osmolalita$increase_cleared = ifelse(osmolalita$id %in% outliars$id, NA, osmolalita$increase)
boxplot(osmolalita$increase_cleared)
# +
# Exploratory analysis for data without outliers
library(dplyr)
osmolalita %>% summarise(count = sum(!is.na(increase_cleared)),
mean = mean(increase_cleared, na.rm = TRUE),
std = sd(increase_cleared, na.rm = TRUE))
# rounding -> 2 significant digits -> according to sd, to units
# -
# Verification of normality
# We verify the assumption of normality with the Shapiro-Wilk test.
shapiro.test(osmolalita$increase_cleared)
# +
# Paired t-test
# H0: mu.increase = 0
# Ha: mu.increase > 0
t.test(osmolalita$increase_cleared, mu = 0, alternative = "greater")
# -
# ## Example 4.
#
# Semiconductor components of two manufacturers - MM and PP - were tested. MM claims that its products have a lower percentage of defective pieces. To verify this claim, 200 components were randomly selected from MM production, of which 14 were defective. A similar experiment was performed at PP with the result of 10 defective out of 100 randomly selected components.
#
# ### a)
#
# Test MM's claim with a pure significance test.
#
#
# +
x.MM = 14
n.MM = 200
p.MM = x.MM/n.MM
p.MM
x.PP = 10
n.PP = 100
p.PP = x.PP/n.PP
p.PP
# -
# Verification of assumptions
9/(p.MM*(1-p.MM))
9/(p.PP*(1-p.PP))
# +
# Pearson's X2 test
# H0: pi.PP=pi.MM
# Ha: pi.PP>pi.MM
prop.test(x = c(x.PP,x.MM),n = c(n.PP,n.MM), alternative = "greater",
conf.level = 0.95)
# At significance level 0.05 we do not reject H0, i.e. the assumption of
# identical defect rates. Therefore it cannot be said that MM has better production.
# +
# Pearson's X2 test
# H0: pi.PP=pi.MM
# Ha: pi.PP!=Pi.MM
prop.test(x = c(x.PP,x.MM),n = c(n.PP,n.MM), alternative = "two.sided",
conf.level = 0.95)
# -
# ### b)
#
# Test MM's claim using an interval estimate at the significance level of 0.05.
#
#
# Based on the 95% Clopper-Pearson right-sided interval estimate
# (-0.036; 1.000), the observed difference in production quality can be
# described as not statistically significant. We reach the same conclusion
# with Pearson's right-sided test.
prop.test(x = c(x.PP,x.MM),n = c(n.PP,n.MM), alternative = "greater",
          conf.level = 0.95)
| Exercise 11/T13_hypothesis_testing2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import glob
import pandas as pd
# +
#Mac
#path ='/Users/cseveriano/spatio-temporal-forecasting/data/raw/SINGA-Nobre/716'
#Windows
path ='C:\\Users\\cseve\\Google Drive\\Doutorado\\Codes\\spatio-temporal-forecasting\\data\\raw\\SINGA-Nobre\\716'
allFiles = glob.iglob(path + "/**/*.txt", recursive=True)
list_ = []
for file_ in allFiles:
    print("Reading: ", file_)
    df = pd.read_csv(file_, index_col="Tm", parse_dates=['Tm'], header=0, sep="\t")
    list_.append(df)
# pd.concat aligns the columns of all files automatically
frame = pd.concat(list_)
# -
def load_data(path, resampling=None):
    ## some resampling options: 'H' - hourly, '15min' - 15 minutes, 'M' - monthly
    ## more options at:
    ## http://benalexkeen.com/resampling-time-series-data-with-pandas/
    allFiles = glob.iglob(path + "/**/*.txt", recursive=True)
    list_ = []
    for file_ in allFiles:
        # print("Reading: ", file_)
        df = pd.read_csv(file_, index_col="Tm", parse_dates=['Tm'], header=0, sep="\t")
        list_.append(df)
    # pd.concat aligns the columns of all files automatically
    frame = pd.concat(list_)
    if resampling is not None:
        frame = frame.resample(resampling).mean()
    return frame
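# As a quick, self-contained check of the resampling behavior that `load_data` relies on (synthetic 15-minute data, not the SINGA files):

```python
import numpy as np
import pandas as pd

# One day of synthetic 15-minute data: 96 samples
idx = pd.date_range("2017-11-01", periods=96, freq="15min")
df = pd.DataFrame({"AvgGsi00": np.arange(96, dtype=float)}, index=idx)

# Hourly resampling averages every 4 consecutive 15-minute samples
hourly = df.resample("h").mean()
print(len(hourly))        # 24
print(hourly.iloc[0, 0])  # 1.5 (mean of 0, 1, 2, 3)
```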
# +
nov_716 = load_data('/Users/cseveriano/spatio-temporal-forecasting/data/raw/SINGA-Nobre/716 2017-11', '15min')
# -
type(nov_716.AvgGsi00)
# +
import matplotlib.pyplot as plt
import numpy as np
import matplotlib.mlab as mlab
Fs = 96  # sampling rate: 96 samples per day (15-minute data)
Ts = 1.0 / Fs  # sampling interval in days
t = np.arange(0, 30, Ts)  # time vector covering 30 days
y = nov_716.AvgGsi00
plt.subplot(211)
plt.plot(t, y)
plt.subplot(212)
plt.psd(y, 96, Fs)
plt.show()
# -
| notebooks/Basic/SINGA.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Movie Recommender System
# The following notebook demonstrates the implementations below in action, built using the MovieLens dataset (26+ million records). The complete implementation of the Python classes used here can be found in the src section of this project.
# - Popularity based recommendation
# - Item similarity based recommendation
# - User similarity based recommendation
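# The `recommenders` classes live in the project's `src` directory and are not shown here; as a rough sketch of the popularity-based idea (a hypothetical helper, not the project's actual class):

```python
import pandas as pd

# Popularity-based recommendation: rank items by the number of distinct
# users who interacted with them and recommend the same top-N to everyone.
def top_n_popular(ratings, user_col, item_col, n=5):
    counts = ratings.groupby(item_col)[user_col].nunique()
    return counts.sort_values(ascending=False).head(n).index.tolist()

toy = pd.DataFrame({
    "user_id": [1, 2, 3, 1, 2, 1],
    "movie_id": ["A", "A", "A", "B", "B", "C"],
})
print(top_n_popular(toy, "user_id", "movie_id", 2))  # ['A', 'B']
```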
import sys
sys.path.insert(0,'/Users/skumar/recommendation_system/src')
# %matplotlib inline
from models import recommenders
from metrics import metrics
import pandas as pd
from sklearn.model_selection import train_test_split
import numpy as np
import time
import joblib
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.metrics.pairwise import linear_kernel, cosine_similarity
from nltk.stem.snowball import SnowballStemmer
from nltk.stem.wordnet import WordNetLemmatizer
from nltk.corpus import wordnet
from nltk.corpus import stopwords
import warnings; warnings.simplefilter('ignore')
from nltk.tokenize import RegexpTokenizer
# # Load MovieLens movie data
rating_df = pd.read_csv('../data/movielens/ratings.csv', sep=',',header=0)
movies_df = pd.read_csv('../data/movielens/movies.csv',sep=',', header=0)
rating_df.columns=['user_id', 'movie_id', 'rating','timestamp']
movies_df.columns=['movie_id', 'title', 'genres']
movies_df.head()
rating_df.head()
combined_df=pd.merge(movies_df, rating_df, on='movie_id')
combined_df.head()
combined_df.shape
# ## Length of the dataset
# ## Showing the most popular movies in the dataset
movie_grouped = combined_df.groupby(['movie_id']).agg({'user_id': 'count'}).reset_index()
grouped_sum = movie_grouped['user_id'].sum()
movie_grouped['percentage'] = movie_grouped['user_id'].div(grouped_sum)*100
movie_grouped.sort_values(['user_id', 'movie_id'], ascending = [0,1]).head()
# ## Count number of unique users in the dataset
users = combined_df['user_id'].unique()
len(users)
combined_df=combined_df.head(100000)
# # Create a movie recommender
train_data, test_data = train_test_split(combined_df, test_size = 0.20, random_state=0)
print(train_data.head(5))
# ## Popularity-based recommendations
# ### Create an instance of popularity based recommender class
pm = recommenders.PopularityRecommender()
pm.create(train_data, 'user_id', 'movie_id')
# ### Use the popularity model to make some predictions
user_id = users[0]
pm.recommend(user_id)
user_id = users[3]
pm.recommend(user_id)
# ## Build a movie recommender with personalization
#
# We now create an item similarity based collaborative filtering model that allows us to make personalized recommendations to each user.
# ## Class for an item similarity based personalized recommender system
# +
is_model = recommenders.ItemSimilarityRecommender()
is_model.create(train_data, 'user_id', 'movie_id')
# -
# ### Use the personalized model to make movie recommendations
# +
#Print the movies for the user in training data
user_id = users[0]
user_items = is_model.get_user_items(user_id)
#
print("------------------------------------------------------------------------------------")
print("Training data movies for the user user_id: %s:" % user_id)
print("------------------------------------------------------------------------------------")
for user_item in user_items:
print(user_item)
print("----------------------------------------------------------------------")
print("Recommendation process going on:")
print("----------------------------------------------------------------------")
#Recommend movies for the user using personalized model
is_model.recommend(user_id)
# -
# ### We can also apply the model to find similar movies to any movie in the dataset
is_model.get_similar_items([16240])
# # Quantitative comparison between the models
#
# We now formally compare the popularity and the personalized models using precision-recall curves.
# ## Precision recall to calculate quality of recommendations
train_data.shape
train_data.head()
# ## Code to plot precision recall curve
# +
import pylab as pl
#Method to generate precision and recall curve
def plot_precision_recall(m1_precision_list, m1_recall_list, m1_label, m2_precision_list, m2_recall_list, m2_label):
pl.clf()
pl.plot(m1_recall_list, m1_precision_list, label=m1_label)
pl.plot(m2_recall_list, m2_precision_list, label=m2_label)
pl.xlabel('Recall')
pl.ylabel('Precision')
pl.ylim([0.0, 0.20])
pl.xlim([0.0, 0.20])
pl.title('Precision-Recall curve')
#pl.legend(loc="upper right")
pl.legend(loc=9, bbox_to_anchor=(0.5, -0.2))
pl.show()
# -
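# The plotting helper above expects precomputed precision and recall lists. A minimal sketch of how precision@k and recall@k can be swept over k to produce such lists (illustrative only; the project's `metrics` module may compute them differently):

```python
# Precision@k and recall@k for one user's ranked recommendations,
# swept over k = 1..max_k.
def precision_recall_at_k(recommended, relevant, max_k):
    relevant = set(relevant)
    precisions, recalls = [], []
    for k in range(1, max_k + 1):
        hits = len(set(recommended[:k]) & relevant)
        precisions.append(hits / k)       # fraction of top-k that is relevant
        recalls.append(hits / len(relevant))  # fraction of relevant items found
    return precisions, recalls

p, r = precision_recall_at_k(["m1", "m2", "m3", "m4"], ["m2", "m4"], 4)
print(r)  # [0.0, 0.5, 0.5, 1.0]
```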
# The curve shows that the personalized model provides much better performance than the popularity model.
| notebooks/1.0-sk-popularity-item-similarity-recommendation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Base Variables
# +
# average / weighted_average / bitwise_or / bitwise_and / network
ENSEMBLE_METHOD = "network"
VIABLE_COUNTABILITY = 0
AVERAGE_ACCEPTABILITY = 0
# Minimum value of credibility per mask
CREDIBILITY_THRESHOLD = 0.5
# Minimum IoU in order to group instances together
DEVIATION_THRESHOLD = 0.5
# Minimum IoU in order to compare instances while evaluating
DEVIATION_THRESHOLD_EVAL = 0.5
#For Weighted Average
AVERAGE_ACCEPTABILITY_2 = 0.5
NETWORK_CHECKPOINTS = "work_dirs/unet2_no_img_ensemble_reduced_/SGD_lrelu_BCELoss/"
# + [markdown] id="5278963b-b35e-4da0-a2d1-34dab1972f80"
# # Segmentation Model Dictionary
# + id="7391a2e3-cc0d-483a-9c93-ebe94d64b483"
import os
model_dict = []
model_dict.append(('hybrid_task_cascade_mask_rcnn_X-101-64x4d-FPN',('configs/htc/htc_x101_64x4d_fpn_dconv_c3-c5_mstrain_400_1400_16x1_20e_coco.py',
'checkpoints/htc_x101_64x4d_fpn_dconv_c3-c5_mstrain_400_1400_16x1_20e_coco_20200312-946fd751.pth',
'https://download.openmmlab.com/mmdetection/v2.0/htc/htc_x101_64x4d_fpn_dconv_c3-c5_mstrain_400_1400_16x1_20e_coco/htc_x101_64x4d_fpn_dconv_c3-c5_mstrain_400_1400_16x1_20e_coco_20200312-946fd751.pth')))
model_dict.append(('detectors_htc_r101_20e_coco',('configs/detectors/detectors_htc_r101_20e_coco.py',
'checkpoints/detectors_htc_r101_20e_coco_20210419_203638-348d533b.pth',
'https://download.openmmlab.com/mmdetection/v2.0/detectors/detectors_htc_r101_20e_coco/detectors_htc_r101_20e_coco_20210419_203638-348d533b.pth')))
model_dict.append(('cascade_mask_rcnn_X-101-64x4d-FPN',('configs/cascade_rcnn/cascade_mask_rcnn_x101_64x4d_fpn_mstrain_3x_coco.py',
'checkpoints/cascade_mask_rcnn_x101_64x4d_fpn_mstrain_3x_coco_20210719_210311-d3e64ba0.pth',
'https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_x101_64x4d_fpn_mstrain_3x_coco/cascade_mask_rcnn_x101_64x4d_fpn_mstrain_3x_coco_20210719_210311-d3e64ba0.pth')))
model_dict.append(('cascade_mask_rcnn_x101_32x4d_fpn_dconv_c3-c5_1x_coco',('configs/dcn/cascade_mask_rcnn_x101_32x4d_fpn_dconv_c3-c5_1x_coco.py',
'checkpoints/cascade_mask_rcnn_x101_32x4d_fpn_dconv_c3-c5_1x_coco-e75f90c8.pth',
'https://download.openmmlab.com/mmdetection/v2.0/dcn/cascade_mask_rcnn_x101_32x4d_fpn_dconv_c3-c5_1x_coco/cascade_mask_rcnn_x101_32x4d_fpn_dconv_c3-c5_1x_coco-e75f90c8.pth')))
model_dict.append(('gcnet_X-101-FPN_DCN_Cascade_Mask_GC(c3-c5,r4)',('configs/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_r4_gcb_c3-c5_1x_coco.py',
'checkpoints/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_r4_gcb_c3-c5_1x_coco_20210615_161851-720338ec.pth',
'https://download.openmmlab.com/mmdetection/v2.0/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_r4_gcb_c3-c5_1x_coco/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_r4_gcb_c3-c5_1x_coco_20210615_161851-720338ec.pth')))
key_list = [model[0] for model in model_dict]
test_config = 'configs/common/mstrain-poly_3x_coco_instance.py'
#test_config = 'configs/common/mstrain-poly_3x_reduced_coco_instance.py'
#test_config = 'configs/_base_/datasets/cityscapes_instance.py'
dataset_name = os.path.splitext(test_config)[0].split('/')[-1]
# + [markdown] id="Em6zu5R2uuJP"
# # Auxiliary Functions
# + id="Lo5mI3mnutsz"
import numpy as np
import torch
def IoU(boxA, boxB):
xA = max(boxA[0], boxB[0])
yA = max(boxA[1], boxB[1])
xB = min(boxA[2], boxB[2])
yB = min(boxA[3], boxB[3])
interArea = max(0, xB - xA + 1) * max(0, yB - yA + 1)
boxAArea = (boxA[2] - boxA[0] + 1) * (boxA[3] - boxA[1] + 1)
boxBArea = (boxB[2] - boxB[0] + 1) * (boxB[3] - boxB[1] + 1)
iou = interArea / float(boxAArea + boxBArea - interArea)
return iou
def IoU_Mask(maskA, maskB):
intersection = np.logical_and(maskA, maskB).astype(np.uint8)
union = np.logical_or(maskA, maskB).astype(np.uint8)
iou = np.sum(intersection)/np.sum(union)
return iou
# -
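# A quick sanity check of the box-IoU helper on toy inputs (definition repeated here so the snippet runs standalone; note the inclusive +1 pixel convention for box coordinates):

```python
def IoU(boxA, boxB):
    # Intersection-over-union of two boxes given as [x1, y1, x2, y2]
    # with inclusive pixel coordinates (hence the +1 terms).
    xA = max(boxA[0], boxB[0])
    yA = max(boxA[1], boxB[1])
    xB = min(boxA[2], boxB[2])
    yB = min(boxA[3], boxB[3])
    interArea = max(0, xB - xA + 1) * max(0, yB - yA + 1)
    boxAArea = (boxA[2] - boxA[0] + 1) * (boxA[3] - boxA[1] + 1)
    boxBArea = (boxB[2] - boxB[0] + 1) * (boxB[3] - boxB[1] + 1)
    return interArea / float(boxAArea + boxBArea - interArea)

# Two 10x10 boxes offset by 5 px in x:
# intersection 5 * 10 = 50, union 100 + 100 - 50 = 150 -> IoU = 1/3
print(IoU([0, 0, 9, 9], [5, 0, 14, 9]))
```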
# # Ensemble Network
# +
import torch.optim as optim
from network_definitions.u_net2 import UNet2
import math
import torch.nn as nn
import torch.nn.functional as F
from torch.nn import init
import torch
BATCH_SIZE = 1
N_CHANNELS = 8
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
net = UNet2(N_CHANNELS,1).float().to(device)
criterion = nn.BCELoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
checkpoint = torch.load(NETWORK_CHECKPOINTS+"epoch_2.pt")
net.load_state_dict(checkpoint['model_state_dict'])
"""optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
epoch = checkpoint['epoch']
loss = checkpoint['loss']"""
net.eval()
# +
from torchinfo import summary
summary(net, (1,5,572,572))
# +
from skimage.transform import resize
from torchvision import transforms, utils
class Resize(object):
def __init__(self, size):
self.size = size
def __call__(self,sample):
return resize(sample,(*self.size,N_CHANNELS))
class ToTensor(object):
"""Convert ndarrays in sample to Tensors."""
def __call__(self, sample):
# torch image: C x H x W
sample = sample.transpose((2, 0, 1))
return torch.from_numpy(sample)
# + [markdown] id="8gLkLqvnvUoM"
# # Ensemble Functions
# + id="a309dfc8-8e8f-4399-8e02-080a64a9b44d"
from mmcv import Config
from mmdet.datasets import build_dataset, build_dataloader
from mmcv.parallel import MMDataParallel
from mmdet.apis import single_gpu_test
from mmdet.models import build_detector
from mmcv.runner import load_checkpoint
from mmdet.apis import inference_detector, init_detector, show_result_pyplot
from typing import List, Tuple, Union, Dict
from torch import nn
from os import path
from urllib import request
from tqdm import tqdm
from PIL import Image
import matplotlib.pyplot as plt
from skimage.transform import resize
from torchvision import transforms, utils
import cv2
import mmcv
import os.path as osp
import pycocotools.mask as mask_util
COCO_CLASSES = ('person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus',
'train', 'truck', 'boat', 'traffic light', 'fire hydrant',
'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog',
'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe',
'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat',
'baseball glove', 'skateboard', 'surfboard', 'tennis racket',
'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl',
'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot',
'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop',
'mouse', 'remote', 'keyboard', 'cell phone', 'microwave',
'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock',
'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush')
CITYSCAPES_CLASSES = ('person', 'rider', 'car', 'truck', 'bus', 'train', 'motorcycle',
'bicycle')
WORK_DIR = "work_dirs/ensemble_results/"
def get_dataset():
test_cfg = Config.fromfile(test_config)
# in case the test dataset is concatenated
samples_per_gpu = 1
if isinstance(test_cfg.data.test, dict):
test_cfg.data.test.test_mode = True
samples_per_gpu = test_cfg.data.test.pop('samples_per_gpu', 1)
if samples_per_gpu > 1:
# Replace 'ImageToTensor' to 'DefaultFormatBundle'
test_cfg.data.test.pipeline = replace_ImageToTensor(
test_cfg.data.test.pipeline)
elif isinstance(test_cfg.data.test, list):
for ds_cfg in test_cfg.data.test:
ds_cfg.test_mode = True
samples_per_gpu = max(
[ds_cfg.pop('samples_per_gpu', 1) for ds_cfg in test_cfg.data.test])
if samples_per_gpu > 1:
for ds_cfg in test_cfg.data.test:
ds_cfg.pipeline = replace_ImageToTensor(ds_cfg.pipeline)
# init distributed env first, since logger depends on the dist info.
distributed = False
#rank, _ = get_dist_info()
# allows not to create
#mmcv.mkdir_or_exist(osp.abspath(args.work_dir))
#timestamp = time.strftime('%Y%m%d_%H%M%S', time.localtime())
#json_file = osp.join(args.work_dir, f'eval_{timestamp}.json')
# build the dataloader
dataset = build_dataset(test_cfg.data.test)
data_loader = build_dataloader(
dataset,
samples_per_gpu=samples_per_gpu,
workers_per_gpu=test_cfg.data.workers_per_gpu,
dist=distributed,
shuffle=False)
return dataset, data_loader
def inference_on_dataset(model_info):
config, checkpoint = model_info
cfg = Config.fromfile(config)
if cfg.get('custom_imports', None):
from mmcv.utils import import_modules_from_strings
import_modules_from_strings(**cfg['custom_imports'])
# set cudnn_benchmark
if cfg.get('cudnn_benchmark', False):
torch.backends.cudnn.benchmark = True
cfg.model.pretrained = None
if cfg.model.get('neck'):
if isinstance(cfg.model.neck, list):
for neck_cfg in cfg.model.neck:
if neck_cfg.get('rfp_backbone'):
if neck_cfg.rfp_backbone.get('pretrained'):
neck_cfg.rfp_backbone.pretrained = None
elif cfg.model.neck.get('rfp_backbone'):
if cfg.model.neck.rfp_backbone.get('pretrained'):
cfg.model.neck.rfp_backbone.pretrained = None
test_cfg = Config.fromfile(test_config)
# in case the test dataset is concatenated
samples_per_gpu = 1
if isinstance(cfg.data.test, dict):
test_cfg.data.test.test_mode = True
samples_per_gpu = test_cfg.data.test.pop('samples_per_gpu', 1)
if samples_per_gpu > 1:
# Replace 'ImageToTensor' to 'DefaultFormatBundle'
test_cfg.data.test.pipeline = replace_ImageToTensor(
test_cfg.data.test.pipeline)
elif isinstance(test_cfg.data.test, list):
for ds_cfg in test_cfg.data.test:
ds_cfg.test_mode = True
samples_per_gpu = max(
[ds_cfg.pop('samples_per_gpu', 1) for ds_cfg in test_cfg.data.test])
if samples_per_gpu > 1:
for ds_cfg in test_cfg.data.test:
ds_cfg.pipeline = replace_ImageToTensor(ds_cfg.pipeline)
# init distributed env first, since logger depends on the dist info.
distributed = False
#rank, _ = get_dist_info()
# allows not to create
#mmcv.mkdir_or_exist(osp.abspath(args.work_dir))
#timestamp = time.strftime('%Y%m%d_%H%M%S', time.localtime())
#json_file = osp.join(args.work_dir, f'eval_{timestamp}.json')
# build the dataloader
dataset = build_dataset(test_cfg.data.test)
data_loader = build_dataloader(
dataset,
samples_per_gpu=samples_per_gpu,
workers_per_gpu=test_cfg.data.workers_per_gpu,
dist=distributed,
shuffle=False)
# build the model and load checkpoint
cfg.model.train_cfg = None
model = build_detector(cfg.model, test_cfg=cfg.get('test_cfg'))
fp16_cfg = cfg.get('fp16', None)
if fp16_cfg is not None:
wrap_fp16_model(model)
checkpoint = load_checkpoint(model, checkpoint, map_location='cpu')
#if args.fuse_conv_bn:
#model = fuse_conv_bn(model)
# old versions did not save class info in checkpoints; this workaround is
# for backward compatibility
if 'CLASSES' in checkpoint.get('meta', {}):
model.CLASSES = checkpoint['meta']['CLASSES']
else:
model.CLASSES = dataset.CLASSES
#classes = model.CLASSES.copy()
classes = list(enumerate(model.CLASSES)).copy()
model = MMDataParallel(model, device_ids=[0])
outputs = single_gpu_test(model, data_loader, None, None, None)
print(len(dataset))
return outputs, classes, len(dataset)
def gather_results_from_model(model_name: str, model_info: Tuple[str,str], score_thr:float, person_only:bool , result_type = 'bbox'):
if not osp.exists(WORK_DIR+model_name+"_"+dataset_name+".pkl"):
mmcv.mkdir_or_exist(osp.abspath(WORK_DIR))
results, original_classes, dataset_size = inference_on_dataset(model_info)
classes = original_classes
mmcv.dump(results, WORK_DIR+model_name+"_"+dataset_name+".pkl")
else:
config,checkpoint = model_info
cfg = Config.fromfile(config)
model = build_detector(cfg.model)
checkpoint = load_checkpoint(model, checkpoint, map_location='cpu')
if 'CLASSES' in checkpoint.get('meta', {}):
model.CLASSES = checkpoint['meta']['CLASSES']
else:
model.CLASSES = dataset.CLASSES
original_classes = list(enumerate(model.CLASSES)).copy()
classes = original_classes
results = mmcv.load(WORK_DIR+model_name+"_"+dataset_name+".pkl")
dataset_size = len(results)
if person_only:
if len(classes) == len(COCO_CLASSES):
classes = [(0,'person')]
elif len(classes) == len(CITYSCAPES_CLASSES):
classes = [(0,'person'),(1,'rider')]
return results,classes,dataset_size
def gather_results(model_dict: Dict[str,Tuple[str,str,str]], score_thr: float, person_only: bool, result_type='bbox'):
#model_dict = model_dict.items()
ensemble_results = {}
dataset_compatible = -1
label_type = []
for i, (name, (config,checkpoint,download_link)) in enumerate(model_dict):
if not path.exists(checkpoint):
print("Downloading",name)
request.urlretrieve(download_link,checkpoint)
print("Finished downloading",name)
print("Loading inference results from model:",name)
ensemble_results[i],classes,dataset_size = gather_results_from_model(name, (config,checkpoint), score_thr, person_only, result_type)
label_type.append(len(classes))
if dataset_compatible < 0 or dataset_compatible == dataset_size:
dataset_compatible = dataset_size
else:
raise(Exception("Dataset sizes are not compatible"))
return ensemble_results,classes,dataset_compatible
def group_instances(dataset,model_dict,ensemble_results, labels: List[str], dataset_size, score_thr, threshold, ensemble_method):
#ensemble_results[model][image][bbox or segm][label][instance]
final_results = []
n_models = len(ensemble_results)
count = 0
#Iterate over all the images
for img in tqdm(range(0,len(dataset))):
bbox_group = []
segm_group = []
if ensemble_method == "network":
filename = dataset[img]['img_metas'][0].data['filename']
ori_img_size = dataset[img]['img_metas'][0].data['ori_shape']
transform = transforms.Compose([Resize((572,572)),ToTensor()])
image = Image.open(filename)
img_array = np.asarray(image)/255
#Iterate over all the labels
for (label_nr,label) in labels:
bbox_results = []
segm_results = []
#Create a matrix of already used instances
used_instances = []
for cur_model in range(0,len(ensemble_results)):
used_instances.insert(cur_model,[False]*len(ensemble_results[cur_model][img][0][label_nr]))
#Iterate over all the models for a certain label and a certain image
for cur_model in range(0,len(ensemble_results)):
#Iterate over the current model's results on a certain label on a certain image
for cur_instance in range(0,len(ensemble_results[cur_model][img][0][label_nr])):
if not used_instances[cur_model][cur_instance] and ensemble_results[cur_model][img][0][label_nr][cur_instance][4] >= CREDIBILITY_THRESHOLD:
#if not used_instances[cur_model][cur_instance] and ensemble_results[cur_model][img][0][label_nr][cur_instance][4] >= model_dict[cur_model][1][3]:
used_instances[cur_model][cur_instance] = True
cur_instance_group = [None for w in range(0,len(ensemble_results))]
cur_instance_group[cur_model] = (ensemble_results[cur_model][img][0][label_nr][cur_instance],
ensemble_results[cur_model][img][1][label_nr][cur_instance])
#Iterate over all the other models
for comp_model in range(cur_model+1,len(ensemble_results)):
deviations = []
#Iterate over each of the other model's results
for comp_instance in range(0,len(ensemble_results[comp_model][img][0][label_nr])):
if ensemble_results[comp_model][img][0][label_nr][comp_instance][4] >= CREDIBILITY_THRESHOLD:
if not used_instances[comp_model][comp_instance]:
#cur_iou = IoU(ensemble_results[cur_model][img][0][label_nr][cur_instance],ensemble_results[comp_model][img][0][label_nr][comp_instance])
boxA = ensemble_results[cur_model][img][0][label_nr][cur_instance]
boxB = ensemble_results[comp_model][img][0][label_nr][comp_instance]
xA = int(round(min(boxA[0], boxB[0])))
yA = int(round(min(boxA[1], boxB[1])))
xB = int(round(max(boxA[2], boxB[2])))
yB = int(round(max(boxA[3], boxB[3])))
cur_iou = IoU_Mask(mask_util.decode(ensemble_results[cur_model][img][1][label_nr][cur_instance])[yA:yB,xA:xB],
mask_util.decode(ensemble_results[comp_model][img][1][label_nr][comp_instance])[yA:yB,xA:xB])
else:
cur_iou = 0.0
deviations.append(cur_iou)
#Check if the max iou is within the threshold and add the new instance to the group
if len(deviations) > 0:
pos = max(range(len(deviations)), key=deviations.__getitem__)
if deviations[pos] >= threshold:
#Guarantee this instance isn't used again
used_instances[comp_model][pos] = True
cur_instance_group[comp_model] = (ensemble_results[comp_model][img][0][label_nr][pos],
ensemble_results[comp_model][img][1][label_nr][pos])
count = 0
for instance_i in cur_instance_group:
if instance_i:
count += 1
# Assuming an instance group is viable if most of the networks identified it
if (count >= (n_models/2) + VIABLE_COUNTABILITY and (not (ensemble_method == "bitwise_and")) and (not (ensemble_method == "bitwise_or"))) or \
(count == n_models and ensemble_method == "bitwise_and") or \
(ensemble_method == "bitwise_or"):
bbox = np.array([0.0]*5)
for model_result in range(0,len(cur_instance_group)):
if not cur_instance_group[model_result] is None:
bbox = np.add(bbox,cur_instance_group[model_result][0])
                            bbox = (bbox/count)
                            confidence = bbox[4]
                            #Round before casting to int (rounding an already-int array is a no-op)
                            bbox = np.around(bbox).astype(int)
bbox_y = (bbox[3]-bbox[1]).astype(int)
bbox_x = (bbox[2]-bbox[0]).astype(int)
if ensemble_method == "network":
return_group = []
for x in range(len(cur_instance_group)):
if cur_instance_group[x] is None:
                                        return_group.append(np.zeros(ori_img_size[:2],dtype=np.uint8))
else:
return_group.append(mask_util.decode(cur_instance_group[x][1]))
pred_stack = np.dstack(return_group)
#fig, ax = plt.subplots(nrows=1, ncols=6, figsize=(30,15))
#ax=ax.flat
#ax[0].set_title("Input Image") # set title
#ax[0].imshow(img_array)
#for i in range(0,5):
#ax[i].set_title("Input "+str(i+1)) # set title
#ax[i].imshow(pred_stack[:,:,i],cmap='gray',vmin=0,vmax=1)
#pred_stack = np.dstack((img_array,pred_stack))
#network_input = transform(np.dstack((img_array,pred_stack)))[None,:].float().to(device)
network_input = transform(pred_stack)[np.newaxis,:]
network_input = network_input.float().to(device)
with torch.no_grad():
mask = net(network_input)
mask = mask.cpu().detach()
#mask = torch.argmax(mask,1)
img_size = (ori_img_size[1],ori_img_size[0])
mask = mask.numpy()[0].transpose((1,2,0))
mask = np.around(mask)
#ax[5].set_title("Output") # set title
#ax[5].imshow(mask,cmap='gray')
#mask = np.where(mask > 0.4, 1, 0)
mask = cv2.resize(mask, dsize=img_size, interpolation=cv2.INTER_NEAREST)
#count+=1
                                #fig.tight_layout()
                                #plt.show()
segmentation = mask.astype("uint8")
else:
mask = np.zeros((bbox_y,bbox_x),dtype=int)
img_size = (0,0)
if ensemble_method == "average":
for model_result in range(0,len(cur_instance_group)):
if not cur_instance_group[model_result] is None:
decoded_mask = mask_util.decode(cur_instance_group[model_result][1])
mask = mask+decoded_mask[bbox[1]:bbox[1]+bbox_y,bbox[0]:bbox[0]+bbox_x].astype(int)
img_size = decoded_mask.shape
acceptability = max(1,count/2 + AVERAGE_ACCEPTABILITY)
mask = mask >= acceptability
elif ensemble_method == "weighted_average":
total_confidence = 0.0
                            for model_result in range(0,len(cur_instance_group)):
                                if not cur_instance_group[model_result] is None:
                                    decoded_mask = mask_util.decode(cur_instance_group[model_result][1])
                                    #Weight each mask by its own model's confidence, not the averaged bbox confidence
                                    model_conf = cur_instance_group[model_result][0][4]
                                    mask = mask+(decoded_mask[bbox[1]:bbox[1]+bbox_y,bbox[0]:bbox[0]+bbox_x].astype(int) * model_conf)
                                    total_confidence += model_conf
                                    img_size = decoded_mask.shape
mask = mask >= AVERAGE_ACCEPTABILITY_2 * total_confidence
elif ensemble_method == "bitwise_or":
for model_result in range(0,len(cur_instance_group)):
if not cur_instance_group[model_result] is None:
decoded_mask = mask_util.decode(cur_instance_group[model_result][1])
mask = mask+decoded_mask[bbox[1]:bbox[1]+bbox_y,bbox[0]:bbox[0]+bbox_x].astype(int)
img_size = decoded_mask.shape
mask = mask > 0.0
elif ensemble_method == "bitwise_and":
for model_result in range(0,len(cur_instance_group)):
decoded_mask = mask_util.decode(cur_instance_group[model_result][1])
mask = mask+decoded_mask[bbox[1]:bbox[1]+bbox_y,bbox[0]:bbox[0]+bbox_x].astype(int)
img_size = decoded_mask.shape
mask = mask == float(n_models)
segmentation = np.zeros(img_size).astype(bool)
segmentation[bbox[1]:bbox[1]+bbox_y,bbox[0]:bbox[0]+bbox_x] = mask
bbox = bbox.astype(float)
bbox[4] = confidence
bbox_results.append(np.array(bbox))
segm_results.append(mask_util.encode(np.asfortranarray(segmentation)))
#segm_results.append(np.array(segmentation))
            bbox_group.append(np.array(bbox_results).reshape(-1,5))
segm_group.append(segm_results)
final_results.append((bbox_group,segm_group))
return final_results
def run_ensemble(dataset, model_dict: Dict[str,Tuple[str,str,str]], score_thr: float, person_only: bool, ensemble_method: str, result_type='segm'):
ensemble_results,classes,dataset_size = gather_results(model_dict,score_thr,person_only,result_type)
results = group_instances(dataset,model_dict,ensemble_results,classes,dataset_size,score_thr,DEVIATION_THRESHOLD,ensemble_method)
    #Force garbage collection in order to release memory
    import gc
    gc.collect()
    return results
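# The grouping in `group_instances` above greedily matches detections across models by mask IoU. A minimal, self-contained sketch of that matching step (binary NumPy masks instead of RLE; `iou_threshold` plays the role of `DEVIATION_THRESHOLD` and is an assumed parameter):

```python
import numpy as np

def mask_iou(a, b):
    """IoU of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union > 0 else 0.0

def greedy_group(per_model_masks, iou_threshold=0.4):
    """Greedily group detections across models: each group holds at most
    one mask per model, matched by best IoU against the seed mask."""
    n_models = len(per_model_masks)
    used = [[False] * len(m) for m in per_model_masks]
    groups = []
    for mi, masks in enumerate(per_model_masks):
        for ii, mask in enumerate(masks):
            if used[mi][ii]:
                continue
            used[mi][ii] = True
            group = [None] * n_models
            group[mi] = mask
            # Look for the best unused match in every later model
            for mj in range(mi + 1, n_models):
                cands = [(ij, mask_iou(mask, m))
                         for ij, m in enumerate(per_model_masks[mj])
                         if not used[mj][ij]]
                if cands:
                    best, iou = max(cands, key=lambda c: c[1])
                    if iou >= iou_threshold:
                        used[mj][best] = True
                        group[mj] = per_model_masks[mj][best]
            groups.append(group)
    return groups
```

# Instances left unmatched seed their own partial group, mirroring the `used_instances` bookkeeping above.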
# + [markdown] id="hBw2l47Gve0i"
# # Run Ensemble
# + id="75e783ac-0bbf-4f81-b5b7-3dec679ec968" outputId="02550cf4-dc5b-4bc9-c048-0ccab5fa1b4d"
import glob
import warnings
import gc
import os
import pickle
warnings.filterwarnings('ignore')
dataset,dataloader = get_dataset()
results = run_ensemble(dataset,model_dict, CREDIBILITY_THRESHOLD,True,ENSEMBLE_METHOD,result_type='segm')
#results = get_dataset(model_dict, CREDIBILITY_THRESHOLD,True,ENSEMBLE_METHOD,result_type='segm')
warnings.filterwarnings('default')
# -
dataset,dataloader = get_dataset()
print(dataset[0])
# # Evaluation
# +
test_cfg = Config.fromfile(test_config)
# in case the test dataset is concatenated
samples_per_gpu = 1
if isinstance(test_cfg.data.test, dict):
test_cfg.data.test.test_mode = True
samples_per_gpu = test_cfg.data.test.pop('samples_per_gpu', 1)
if samples_per_gpu > 1:
# Replace 'ImageToTensor' to 'DefaultFormatBundle'
test_cfg.data.test.pipeline = replace_ImageToTensor(
test_cfg.data.test.pipeline)
elif isinstance(test_cfg.data.test, list):
for ds_cfg in test_cfg.data.test:
ds_cfg.test_mode = True
samples_per_gpu = max(
[ds_cfg.pop('samples_per_gpu', 1) for ds_cfg in test_cfg.data.test])
if samples_per_gpu > 1:
for ds_cfg in test_cfg.data.test:
ds_cfg.pipeline = replace_ImageToTensor(ds_cfg.pipeline)
# init distributed env first, since logger depends on the dist info.
distributed = False
#rank, _ = get_dist_info()
# allows not to create
#mmcv.mkdir_or_exist(osp.abspath(args.work_dir))
#timestamp = time.strftime('%Y%m%d_%H%M%S', time.localtime())
#json_file = osp.join(args.work_dir, f'eval_{timestamp}.json')
# build the dataloader
dataset = build_dataset(test_cfg.data.test)
data_loader = build_dataloader(
dataset,
samples_per_gpu=samples_per_gpu,
workers_per_gpu=test_cfg.data.workers_per_gpu,
dist=distributed,
shuffle=False)
#print(metric)
#metric_dict = dict(config=args.config, metric=metric)
#if args.work_dir is not None and rank == 0:
#mmcv.dump(metric_dict, json_file)
# -
print(dataset)
eval_kwargs = dict(metric=["bbox","segm"],classwise=True)
metric = dataset.evaluate(results, **eval_kwargs)
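# The custom evaluation below reports confusion-matrix entries per size bucket. For reference, the derived metrics (OA, P, R, F1, IoU) follow the usual pixel-level definitions; a sketch with illustrative counts, not taken from this dataset:

```python
def pixel_metrics(tp, fp, tn, fn):
    """Standard pixel-level metrics from confusion-matrix counts."""
    oa = (tp + tn) / (tp + fp + tn + fn)            # overall accuracy
    p = tp / (tp + fp) if tp + fp else 0.0          # precision
    r = tp / (tp + fn) if tp + fn else 0.0          # recall
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return oa, p, r, f1, iou

# Illustrative example
oa, p, r, f1, iou = pixel_metrics(tp=80, fp=10, tn=100, fn=10)
```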
# +
bc_info = dataset.custom_evaluate(results,DEVIATION_THRESHOLD_EVAL)
print("TP","{:.5f}".format(np.average([j for sub in list(bc_info['tp'].values()) for j in sub[0]])),
"{:.5f}".format(np.average([j for sub in list(bc_info['tp'].values()) for j in sub[1]])),
"{:.5f}".format(np.average([j for sub in list(bc_info['tp'].values()) for j in sub[2]])),
"{:.5f}".format(np.average([j for sub in list(bc_info['tp'].values()) for j in sub[3]])))
print("FP","{:.5f}".format(np.average([j for sub in list(bc_info['fp'].values()) for j in sub[0]])),
"{:.5f}".format(np.average([j for sub in list(bc_info['fp'].values()) for j in sub[1]])),
"{:.5f}".format(np.average([j for sub in list(bc_info['fp'].values()) for j in sub[2]])),
"{:.5f}".format(np.average([j for sub in list(bc_info['fp'].values()) for j in sub[3]])))
print("TN","{:.5f}".format(np.average([j for sub in list(bc_info['tn'].values()) for j in sub[0]])),
"{:.5f}".format(np.average([j for sub in list(bc_info['tn'].values()) for j in sub[1]])),
"{:.5f}".format(np.average([j for sub in list(bc_info['tn'].values()) for j in sub[2]])),
"{:.5f}".format(np.average([j for sub in list(bc_info['tn'].values()) for j in sub[3]])))
print("FN","{:.5f}".format(np.average([j for sub in list(bc_info['fn'].values()) for j in sub[0]])),
"{:.5f}".format(np.average([j for sub in list(bc_info['fn'].values()) for j in sub[1]])),
"{:.5f}".format(np.average([j for sub in list(bc_info['fn'].values()) for j in sub[2]])),
"{:.5f}".format(np.average([j for sub in list(bc_info['fn'].values()) for j in sub[3]])))
print("OA","{:.5f}".format(np.average([j for sub in list(bc_info['oa'].values()) for j in sub[0]])),
"{:.5f}".format(np.average([j for sub in list(bc_info['oa'].values()) for j in sub[1]])))
print("P","{:.5f}".format(np.average([j for sub in list(bc_info['p'].values()) for j in sub[0]])),
"{:.5f}".format(np.average([j for sub in list(bc_info['p'].values()) for j in sub[1]])))
print("R","{:.5f}".format(np.average([j for sub in list(bc_info['r'].values()) for j in sub[0]])),
"{:.5f}".format(np.average([j for sub in list(bc_info['r'].values()) for j in sub[1]])))
print("F1","{:.5f}".format(np.average([j for sub in list(bc_info['f1'].values()) for j in sub[0]])),
"{:.5f}".format(np.average([j for sub in list(bc_info['f1'].values()) for j in sub[1]])))
print("IoU","{:.5f}".format(np.average([j for sub in list(bc_info['iou'].values()) for j in sub[0]])),
"{:.5f}".format(np.average([j for sub in list(bc_info['iou'].values()) for j in sub[1]])))
cg_list = list(bc_info['cg'].values())
ig_list = list(bc_info['ig'].values())
ng_list = list(bc_info['ng'].values())
if (not cg_list) or (not ig_list) or (not ng_list):
print("GC nan nan nan nan")
print("CG nan nan nan nan")
print("TCG nan nan nan nan")
print("NG nan nan nan nan")
print("TNG nan nan nan nan")
print("TIG nan nan nan nan")
print("UGR50 nan nan nan nan")
print("UGR75 nan nan nan nan")
print("UGR25 nan nan nan nan")
else:
cg = np.sum(cg_list,axis=0)
ig = np.sum(ig_list,axis=0)
ng = np.sum(ng_list,axis=0)
ideal_guesses = sum([cg[0],ng[0]])
cg_ig = sum([cg[0],ig[0]])
total_guesses = sum([cg[0],ig[0],ng[0]])
ideal_guesses_small = sum([cg[1],ng[1]])
cg_ig_small = sum([cg[1],ig[1]])
total_guesses_small = sum([cg[1],ig[1],ng[1]])
ideal_guesses_medium = sum([cg[2],ng[2]])
cg_ig_medium = sum([cg[2],ig[2]])
total_guesses_medium = sum([cg[2],ig[2],ng[2]])
ideal_guesses_large = sum([cg[3],ng[3]])
cg_ig_large = sum([cg[3],ig[3]])
total_guesses_large = sum([cg[3],ig[3],ng[3]])
print("GC","{:.5f}".format(cg[0]/cg_ig),
"{:.5f}".format(cg[1]/cg_ig_small),
"{:.5f}".format(cg[2]/cg_ig_medium),
"{:.5f}".format(cg[3]/cg_ig_large))
print("CG","{:.5f}".format(cg[0]/ideal_guesses),
"{:.5f}".format(cg[1]/ideal_guesses_small),
"{:.5f}".format(cg[2]/ideal_guesses_medium),
"{:.5f}".format(cg[3]/ideal_guesses_large))
print("TCG","{:.5f}".format(cg[0]/total_guesses),
"{:.5f}".format(cg[1]/total_guesses_small),
"{:.5f}".format(cg[2]/total_guesses_medium),
"{:.5f}".format(cg[3]/total_guesses_large))
print("NG","{:.5f}".format(ng[0]/ideal_guesses),
"{:.5f}".format(ng[1]/ideal_guesses_small),
"{:.5f}".format(ng[2]/ideal_guesses_medium),
"{:.5f}".format(ng[3]/ideal_guesses_large))
print("TNG","{:.5f}".format(ng[0]/total_guesses),
"{:.5f}".format(ng[1]/total_guesses_small),
"{:.5f}".format(ng[2]/total_guesses_medium),
"{:.5f}".format(ng[3]/total_guesses_large))
print("TIG","{:.5f}".format(ig[0]/total_guesses),
"{:.5f}".format(ig[1]/total_guesses_small),
"{:.5f}".format(ig[2]/total_guesses_medium),
"{:.5f}".format(ig[3]/total_guesses_large))
print("UGR50","{:.5f}".format(cg[0]/((0.50*ng[0])+(0.50*ig[0]))),
"{:.5f}".format(cg[1]/((0.50*ng[1])+(0.50*ig[1]))),
"{:.5f}".format(cg[2]/((0.50*ng[2])+(0.50*ig[2]))),
"{:.5f}".format(cg[3]/((0.50*ng[3])+(0.50*ig[3]))))
print("UGR75","{:.5f}".format(cg[0]/((0.75*ng[0])+(0.25*ig[0]))),
"{:.5f}".format(cg[1]/((0.75*ng[1])+(0.25*ig[1]))),
"{:.5f}".format(cg[2]/((0.75*ng[2])+(0.25*ig[2]))),
"{:.5f}".format(cg[3]/((0.75*ng[3])+(0.25*ig[3]))))
print("UGR25","{:.5f}".format(cg[0]/((0.25*ng[0])+(0.75*ig[0]))),
"{:.5f}".format(cg[1]/((0.25*ng[1])+(0.75*ig[1]))),
"{:.5f}".format(cg[2]/((0.25*ng[2])+(0.75*ig[2]))),
"{:.5f}".format(cg[3]/((0.25*ng[3])+(0.75*ig[3]))))
# -
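# The UGR lines above relate correct guesses to a weighted mix of not-guessed and incorrectly-guessed instances. A compact helper expressing that ratio (`cg`/`ng`/`ig` are the same aggregated counts; the example numbers are illustrative):

```python
def ugr(cg, ng, ig, w):
    """Usable-guess ratio: correct guesses over a w-weighted mix of
    not-guessed (ng) and incorrectly-guessed (ig) instances."""
    denom = w * ng + (1.0 - w) * ig
    return cg / denom if denom else float('nan')

# UGR50 weights both error types equally; UGR75 penalises missed
# instances more heavily, UGR25 penalises incorrect guesses more.
```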
# # Image Viewer
with open('work_dirs/ensemble_aux/coco_image_names.pickle', 'rb') as handle:
images_names = pickle.load(handle)
with open('work_dirs/ensemble_aux/coco_image_names2.pickle', 'rb') as handle:
images_names2 = pickle.load(handle)
GT = dataset.gt_return()
p = GT
print(len(p[139]))
print(GT[139][0]['segmentation'])
result_list = ["GT","results/1|2|3|4_e=average_c=0.2_v=-1_d=0.4_a=0.pkl","results/1|2_e=average_c=0.2_v=-1_d=0.4_a=-1.pkl"]
# +
from ipywidgets import Select,Layout,Output,VBox,HBox
from IPython.display import display
import os
from PIL import Image
import matplotlib.pyplot as plt
import matplotlib.patches as patches
config, checkpoint,_ = model_dict[0][1]
model = init_detector(config, checkpoint, device='cuda:0')
select = Select(
description="Image",
options=images_names,
value=images_names[0],
rows=15,
disabled=False,
)
select2 = Select(
description="Ensemble",
options=result_list,
value=result_list[0],
rows=15,
disabled=False,
)
output = Output(
layout=Layout(width='70%')
)
display(HBox([VBox([select,select2]),output]))
def on_element_clicked(b):
with output:
output.clear_output()
if select2.value != "GT":
            results = mmcv.load(select2.value)
img = images_names2[select.value]
seg_res = []
for i in range(0,len(results[img][1])):
seg_res2 = []
for j in range(0,len(results[img][1][i])):
mask = mask_util.decode(results[img][1][i][j]).astype(bool)
seg_res2.append(mask)
seg_res.append(seg_res2)
show_result_pyplot(model, select.value, (results[img][0],seg_res), score_thr=CREDIBILITY_THRESHOLD)
else:
results = GT
img = int(os.path.splitext(select.value)[0].split("/")[-1])
seg_res = []
bbox_res = []
image = Image.open(select.value)
            fig, ax = plt.subplots(figsize=(20, 16), dpi=160)
ax.imshow(image,aspect='auto')
for i in range(0,len(results[img])):
bbox_res = results[img][i]['bbox']
bbox = patches.Rectangle((bbox_res[0], bbox_res[1]), bbox_res[2], bbox_res[3], linewidth=1, edgecolor='r', facecolor='none')
ax.add_patch(bbox)
mask = mask_util.decode(results[img][i]['segmentation']).astype(int)
mask = np.ma.masked_values(mask, 0)
ax.imshow(mask, alpha=0.5, cmap="Reds", interpolation = 'none')
print(bbox_res)
plt.show()
#plt.imshow(img_2, alpha=0.5)
#show_result_pyplot(model, select.value, (bbox_res,seg_res), score_thr=CREDIBILITY_THRESHOLD)
select.observe(on_element_clicked, names='value')
select2.observe(on_element_clicked, names='value')
# + id="f456b6c6-a8a1-4fdb-a3e6-9af1604fed03" outputId="3e913b10-38de-4aee-99ae-1cc198fcb70a"
config, checkpoint,_ = model_dict[0][1]
model = init_detector(config, checkpoint, device='cuda:0')
for img in range(0,len(dataset)):
seg_res = []
for i in range(0,len(results[img][1])):
seg_res2 = []
for j in range(0,len(results[img][1][i])):
mask = mask_util.decode(results[img][1][i][j]).astype(bool)
seg_res2.append(mask)
seg_res.append(seg_res2)
print(dataset[img]['img_metas'][0].data['filename'])
show_result_pyplot(model, dataset[img]['img_metas'][0].data['filename'], (results[img][0],seg_res), score_thr=CREDIBILITY_THRESHOLD)
# -
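# The loops above repeatedly decode COCO-style RLE masks with `mask_util`. As an illustration of the underlying idea only — not COCO's compressed RLE format — a toy column-major run-length codec:

```python
import numpy as np

def rle_encode(mask):
    """Toy column-major run-length encoding of a boolean mask:
    counts of alternating 0s and 1s, starting with 0s."""
    flat = mask.flatten(order='F').astype(np.uint8)
    counts, prev, run = [], 0, 0
    for v in flat:
        if v == prev:
            run += 1
        else:
            counts.append(run)
            prev, run = v, 1
    counts.append(run)
    return {'size': mask.shape, 'counts': counts}

def rle_decode(rle):
    """Inverse of rle_encode: expand runs back into a boolean mask."""
    flat = np.zeros(sum(rle['counts']), dtype=bool)
    pos, val = 0, False
    for c in rle['counts']:
        flat[pos:pos + c] = val
        pos += c
        val = not val
    return flat.reshape(rle['size'], order='F')
```

# The real format additionally compresses the counts; `mask_util.encode` also requires a Fortran-ordered array, which is why the notebook wraps masks in `np.asfortranarray`.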
# # Result Recorder
# +
# #%%capture cap --no-stderr
import gc
import re
import pickle
def ensemble_and_evaluate(model_dict,order):
    results = run_ensemble(dataset,model_dict, CREDIBILITY_THRESHOLD,True,ENSEMBLE_METHOD,result_type='segm')
title = "results/"+('|'.join(str(e) for e in order))+"_e="+str(ENSEMBLE_METHOD)+"_c="+str(CREDIBILITY_THRESHOLD)+"_v="+str(VIABLE_COUNTABILITY)+"_d="+str(DEVIATION_THRESHOLD)
if ENSEMBLE_METHOD == 'average':
title = title + "_a="+str(AVERAGE_ACCEPTABILITY)
elif ENSEMBLE_METHOD == 'weighted_average':
title = title + "_a2="+str(AVERAGE_ACCEPTABILITY_2)
title = title + ".pkl"
with open(title, 'wb') as f:
pickle.dump(results, f)
test_cfg = Config.fromfile(test_config)
# in case the test dataset is concatenated
samples_per_gpu = 1
if isinstance(test_cfg.data.test, dict):
test_cfg.data.test.test_mode = True
samples_per_gpu = test_cfg.data.test.pop('samples_per_gpu', 1)
if samples_per_gpu > 1:
# Replace 'ImageToTensor' to 'DefaultFormatBundle'
test_cfg.data.test.pipeline = replace_ImageToTensor(
test_cfg.data.test.pipeline)
elif isinstance(test_cfg.data.test, list):
for ds_cfg in test_cfg.data.test:
ds_cfg.test_mode = True
samples_per_gpu = max(
[ds_cfg.pop('samples_per_gpu', 1) for ds_cfg in test_cfg.data.test])
if samples_per_gpu > 1:
for ds_cfg in test_cfg.data.test:
ds_cfg.pipeline = replace_ImageToTensor(ds_cfg.pipeline)
# init distributed env first, since logger depends on the dist info.
distributed = False
#rank, _ = get_dist_info()
# allows not to create
#mmcv.mkdir_or_exist(osp.abspath(args.work_dir))
#timestamp = time.strftime('%Y%m%d_%H%M%S', time.localtime())
#json_file = osp.join(args.work_dir, f'eval_{timestamp}.json')
# build the dataloader
dataset = build_dataset(test_cfg.data.test)
data_loader = build_dataloader(
dataset,
samples_per_gpu=samples_per_gpu,
workers_per_gpu=test_cfg.data.workers_per_gpu,
dist=distributed,
shuffle=False)
eval_kwargs = dict(metric=["bbox","segm"],classwise=True)
metric = dataset.evaluate(results, **eval_kwargs)
bc_info = dataset.evaluateBC(results,0.35)
tps = np.sum(list(bc_info['tp'].values()),axis=0)
fps = np.sum(list(bc_info['fp'].values()),axis=0)
tns = np.sum(list(bc_info['tn'].values()),axis=0)
fns = np.sum(list(bc_info['fn'].values()),axis=0)
    #Copy from here
total_1 = sum([tps[0],fps[0],tns[0],fns[0]])
total_1_small = sum([tps[1],fps[1],tns[1],fns[1]])
total_1_medium = sum([tps[2],fps[2],tns[2],fns[2]])
total_1_large = sum([tps[3],fps[3],tns[3],fns[3]])
total_2 = sum([tps[4],fps[4],tns[4],fns[4]])
total_2_small = sum([tps[5],fps[5],tns[5],fns[5]])
total_2_medium = sum([tps[6],fps[6],tns[6],fns[6]])
total_2_large = sum([tps[7],fps[7],tns[7],fns[7]])
print("True Positives",tps[0],"{:.5f}".format(tps[0]/total_1),
tps[1],"{:.5f}".format(tps[1]/total_1_small),
tps[2],"{:.5f}".format(tps[2]/total_1_medium),
tps[3],"{:.5f}".format(tps[3]/total_1_large),
tps[4],"{:.5f}".format(tps[4]/total_2),
tps[5],"{:.5f}".format(tps[5]/total_2_small),
tps[6],"{:.5f}".format(tps[6]/total_2_medium),
tps[7],"{:.5f}".format(tps[7]/total_2_large))
print("False Positives",fps[0],"{:.5f}".format(fps[0]/total_1),
fps[1],"{:.5f}".format(fps[1]/total_1_small),
fps[2],"{:.5f}".format(fps[2]/total_1_medium),
fps[3],"{:.5f}".format(fps[3]/total_1_large),
fps[4],"{:.5f}".format(fps[4]/total_2),
fps[5],"{:.5f}".format(fps[5]/total_2_small),
fps[6],"{:.5f}".format(fps[6]/total_2_medium),
fps[7],"{:.5f}".format(fps[7]/total_2_large))
print("True Negatives",tns[0],"{:.5f}".format(tns[0]/total_1),
tns[1],"{:.5f}".format(tns[1]/total_1_small),
tns[2],"{:.5f}".format(tns[2]/total_1_medium),
tns[3],"{:.5f}".format(tns[3]/total_1_large),
tns[4],"{:.5f}".format(tns[4]/total_2),
tns[5],"{:.5f}".format(tns[5]/total_2_small),
tns[6],"{:.5f}".format(tns[6]/total_2_medium),
tns[7],"{:.5f}".format(tns[7]/total_2_large))
print("False Negatives",fns[0],"{:.5f}".format(fns[0]/total_1),
fns[1],"{:.5f}".format(fns[1]/total_1_small),
fns[2],"{:.5f}".format(fns[2]/total_1_medium),
fns[3],"{:.5f}".format(fns[3]/total_1_large),
fns[4],"{:.5f}".format(fns[4]/total_2),
fns[5],"{:.5f}".format(fns[5]/total_2_small),
fns[6],"{:.5f}".format(fns[6]/total_2_medium),
fns[7],"{:.5f}".format(fns[7]/total_2_large))
cg = list(bc_info['correct_guesses'].values())
ig = list(bc_info['incorrect_guesses'].values())
ng = list(bc_info['not_guessed'].values())
correct_guesses = np.sum(cg,axis=0)
incorrect_guesses = np.sum(ig,axis=0)
not_guessed = np.sum(ng,axis=0)
is_crowds = list(bc_info['is_crowd'].values())
crowd_cg = list(zip(cg,is_crowds))
crowd_cg,_ = zip(*[x for x in crowd_cg if x[1] is True])
crowd_ig = list(zip(ig,is_crowds))
crowd_ig,_ = zip(*[x for x in crowd_ig if x[1] is True])
crowd_ng = list(zip(ng,is_crowds))
crowd_ng,_ = zip(*[x for x in crowd_ng if x[1] is True])
crowd_correct_guesses = np.sum(crowd_cg,axis=0)
crowd_incorrect_guesses = np.sum(crowd_ig,axis=0)
crowd_not_guessed = np.sum(crowd_ng,axis=0)
total_guesses = sum([correct_guesses[0],incorrect_guesses[0],not_guessed[0]])
total_guesses_small = sum([correct_guesses[1],incorrect_guesses[1],not_guessed[1]])
total_guesses_medium = sum([correct_guesses[2],incorrect_guesses[2],not_guessed[2]])
total_guesses_large = sum([correct_guesses[3],incorrect_guesses[3],not_guessed[3]])
total_crowd_guesses = sum([crowd_correct_guesses[0],crowd_incorrect_guesses[0],crowd_not_guessed[0]])
total_crowd_guesses_small = sum([crowd_correct_guesses[1],crowd_incorrect_guesses[1],crowd_not_guessed[1]])
total_crowd_guesses_medium = sum([crowd_correct_guesses[2],crowd_incorrect_guesses[2],crowd_not_guessed[2]])
total_crowd_guesses_large = sum([crowd_correct_guesses[3],crowd_incorrect_guesses[3],crowd_not_guessed[3]])
print("Correct Guesses",correct_guesses[0],"{:.5f}".format(correct_guesses[0]/total_guesses),
correct_guesses[1],"{:.5f}".format(correct_guesses[1]/total_guesses_small),
correct_guesses[2],"{:.5f}".format(correct_guesses[2]/total_guesses_medium),
correct_guesses[3],"{:.5f}".format(correct_guesses[3]/total_guesses_large),
crowd_correct_guesses[0],"{:.5f}".format(crowd_correct_guesses[0]/total_crowd_guesses),
crowd_correct_guesses[1],"{:.5f}".format(crowd_correct_guesses[1]/total_crowd_guesses_small),
crowd_correct_guesses[2],"{:.5f}".format(crowd_correct_guesses[2]/total_crowd_guesses_medium),
crowd_correct_guesses[3],"{:.5f}".format(crowd_correct_guesses[3]/total_crowd_guesses_large))
print("Incorrect Guesses",incorrect_guesses[0],"{:.5f}".format(incorrect_guesses[0]/total_guesses),
incorrect_guesses[1],"{:.5f}".format(incorrect_guesses[1]/total_guesses_small),
incorrect_guesses[2],"{:.5f}".format(incorrect_guesses[2]/total_guesses_medium),
incorrect_guesses[3],"{:.5f}".format(incorrect_guesses[3]/total_guesses_large),
crowd_incorrect_guesses[0],"{:.5f}".format(crowd_incorrect_guesses[0]/total_crowd_guesses),
crowd_incorrect_guesses[1],"{:.5f}".format(crowd_incorrect_guesses[1]/total_crowd_guesses_small),
crowd_incorrect_guesses[2],"{:.5f}".format(crowd_incorrect_guesses[2]/total_crowd_guesses_medium),
crowd_incorrect_guesses[3],"{:.5f}".format(crowd_incorrect_guesses[3]/total_crowd_guesses_large))
print("Not Guessed",not_guessed[0],"{:.5f}".format(not_guessed[0]/total_guesses),
not_guessed[1],"{:.5f}".format(not_guessed[1]/total_guesses_small),
not_guessed[2],"{:.5f}".format(not_guessed[2]/total_guesses_medium),
not_guessed[3],"{:.5f}".format(not_guessed[3]/total_guesses_large),
crowd_not_guessed[0],"{:.5f}".format(crowd_not_guessed[0]/total_crowd_guesses),
crowd_not_guessed[1],"{:.5f}".format(crowd_not_guessed[1]/total_crowd_guesses_small),
crowd_not_guessed[2],"{:.5f}".format(crowd_not_guessed[2]/total_crowd_guesses_medium),
crowd_not_guessed[3],"{:.5f}".format(crowd_not_guessed[3]/total_crowd_guesses_large))
def ordering_recursion(models, missing_iterations, used_array, order_array):
if missing_iterations == 0:
ordered_model_dict = []
print("Model Order: ",end="")
for i in range(0,len(order_array)):
ordered_model_dict.append(models[order_array[i]])
print(ordered_model_dict[i][0],end=" -> ")
print("")
ensemble_and_evaluate(ordered_model_dict,order_array)
else:
for i in range(0, len(models)):
if not used_array[i]:
cur_used_array = used_array.copy()
cur_used_array[i] = True
cur_order_array = order_array.copy()
cur_order_array.append(i)
ordering_recursion(models,missing_iterations-1,cur_used_array,cur_order_array)
def non_ordered_recursion(models, next_model, missing_iterations, used_array, order_array):
if missing_iterations == 0:
ordered_model_dict = []
print("Model Order: ",end="")
for i in range(0,len(order_array)):
ordered_model_dict.append(models[order_array[i]])
print(ordered_model_dict[i][0],end=" -> ")
print("")
ensemble_and_evaluate(ordered_model_dict,order_array)
else:
for i in range(next_model, len(models)):
if not used_array[i]:
cur_used_array = used_array.copy()
cur_used_array[i] = True
cur_order_array = order_array.copy()
cur_order_array.append(i)
non_ordered_recursion(models,i,missing_iterations-1,cur_used_array,cur_order_array)
def ensemble_permutations(models):
used_array = [False] * len(models)
for i in range(1,len(models)+1):
non_ordered_recursion(models,0, i, used_array, [])
#ordering_recursion(models, i, used_array, [])
# -
perm_results = ensemble_permutations(model_dict)
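# `non_ordered_recursion` above enumerates every non-empty, order-independent subset of the models. The same enumeration can be written with `itertools.combinations`; a sketch over model indices (assumes nothing beyond the model count):

```python
from itertools import combinations

def model_subsets(n_models):
    """All non-empty index subsets, smallest first — the same sets that
    non_ordered_recursion visits (indices within a subset are ascending)."""
    subsets = []
    for size in range(1, n_models + 1):
        subsets.extend(list(c) for c in combinations(range(n_models), size))
    return subsets
```

# For n models this yields 2**n - 1 subsets, so the full sweep grows quickly with ensemble size.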
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# [View in Colaboratory](https://colab.research.google.com/github/todun/googlecolab/blob/master/Scala_on_Google_Colaboratory.ipynb)
# + [markdown] id="cTEL15Af3o-W" colab_type="text"
# 
# * work in progress.
# * not all sections work
# + [markdown] id="gf1_OxCD4565" colab_type="text"
# # Update & Upgrade Google Colab
# The VM is a Debian Linux box; shell commands (prefixed with the bang operator `!`) can be chained with `&&` to update and upgrade it.
# + id="sPTsrFUQ6feC" colab_type="code" colab={}
# !apt-get update && apt-get upgrade
# + [markdown] id="SrSqFXW069BS" colab_type="text"
# # Install Scala
# The scala package can be installed with apt-get
# + id="TCqjNql65ANY" colab_type="code" colab={}
# !apt-get install scala scala-library
# + [markdown] id="jMuAkUsp6FlS" colab_type="text"
# # Search for a package
# On this Debian-based VM, search for a package in Google Colab with:
#
#     !apt-cache search <package-name>
# + id="VxKA56ed6R8m" colab_type="code" colab={}
# !apt-cache search apache-spark
# + [markdown] id="SbxP_YGT7s60" colab_type="text"
# # Install Protocol Buffers Compiler
# TensorFlow for Scala also requires protoc, the Protocol Buffers compiler, to be installed. The simplest option here is to install it with apt-get:
# + id="zFSyy6xx7oI7" colab_type="code" colab={}
# !apt-get install protobuf-compiler
# + id="0pqRK_YG72Bb" colab_type="code" colab={}
# !scala -version
# + [markdown] id="9D6umsFNBiD2" colab_type="text"
# # [Install Sbt on linux debian ](https://stackoverflow.com/a/44252860)
# Replace sbt-0.13.15 below with the actual version of sbt you want.
#
# + id="mKu5hAP69iKp" colab_type="code" colab={}
# !apt --fix-broken install
# !curl -L -o sbt.deb http://dl.bintray.com/sbt/debian/sbt-0.13.15.deb
# !dpkg -i sbt.deb
# !apt-get update
# !apt-get install sbt
# # !sbt
# + [markdown] id="qyfrSxlqxqlU" colab_type="text"
# # [Setup anaconda](https://stackoverflow.com/a/50302183)
# Setup the anaconda python package manager in Google Colaboratory
#
# + id="XrDL6kY_VzVO" colab_type="code" colab={}
# !ls /usr/local/
# + id="Y9daFr9txsle" colab_type="code" colab={}
# !wget -c https://repo.continuum.io/archive/Anaconda3-5.1.0-Linux-x86_64.sh
# !chmod +x Anaconda3-5.1.0-Linux-x86_64.sh
# !bash ./Anaconda3-5.1.0-Linux-x86_64.sh -b -f -p /usr/local
# !conda install -y --prefix /usr/local -c conda-forge -c dlr-sc -c pythonocc -c oce pythonocc-core # !conda install -y --prefix /usr/local -c <<<your wish>>>>
import sys
sys.path.append('/usr/local/lib/python3.6/site-packages/')
# !conda update -n base conda -y
# + [markdown] id="gt53KIO1y-V_" colab_type="text"
# # Install spylon-kernel
# https://github.com/Valassis-Digital-Media/spylon-kernel
#
#
# + id="UzHboV-Ey2i9" colab_type="code" colab={}
# !conda install -c conda-forge spylon-kernel -y
# + [markdown] id="saIDUtkp0TpT" colab_type="text"
# ### Using spylon-kernel as a ipython magic
#
# https://github.com/Valassis-Digital-Media/spylon-kernel
#
# https://github.com/Valassis-Digital-Media/spylon-kernel/blob/master/examples/basic_example.ipynb
# + id="h4trpycZ0W0T" colab_type="code" colab={}
from spylon_kernel import register_ipython_magics
register_ipython_magics()
# + id="Q93u6993gLdr" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="a013ed5b-3928-4c71-b0ae-064336c3b186"
# !$JAVA_HOME
# + id="kKeCutPf0asZ" colab_type="code" colab={} language="scala"
# val x = 8
# x
# + id="AfayDNUf1D7v" colab_type="code" colab={}
# %%init_spark
launcher.num_executors = 4
launcher.executor_cores = 2
launcher.driver_memory = '4g'
launcher.conf.set("spark.sql.catalogImplementation", "hive")
# + [markdown] id="duke-zzZVy4B" colab_type="text"
# # Install Jupyter Scala kernel
# * Aims to install [Jupyter Scala kernel](https://github.com/jupyter-scala/jupyter-scala) into the Google Colaboratory environment
#
# + id="4rcHwrLBAVjj" colab_type="code" colab={}
# !wget -qO- https://oss.sonatype.org/content/repositories/snapshots/sh/jove/jove-scala-cli_2.11/0.1.1-1-SNAPSHOT/jove-scala-cli_2.11-0.1.1-1-SNAPSHOT.tar.gz | tar xvz
# + id="786j-v8sX0CH" colab_type="code" colab={}
# !./jove-scala-cli-0.1.1-1-SNAPSHOT/bin/jove-scala
# + id="_oIbQANdYWI1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 170} outputId="f7f650e3-e90e-4446-9f2a-1dd241d30cbb"
# !jupyter kernelspec list
# + id="WIj9VUNyhyoR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="fd204b70-8f53-4513-a9f9-269045d974a0"
# !jupyter console --kernel scala
# + [markdown] id="TEoi-wHph428" colab_type="text"
# # Install Apache (incubating) Toree
# + id="y-LgpcBTbLDZ" colab_type="code" colab={}
# !pip install https://dist.apache.org/repos/dist/dev/incubator/toree/0.2.0/snapshots/dev1/toree-pip/toree-0.2.0.dev1.tar.gz
# + id="N6FHa4RzeXt0" colab_type="code" colab={}
# !jupyter toree install
# + [markdown] id="NBA4rQFM1P63" colab_type="text"
# # [Install beakerx: first time](http://beakerx.com/documentation)
#
# https://github.com/twosigma/beakerx
#
# https://github.com/twosigma/beakerx/blob/00659097665cb78bdfd095882df02224e2699ba1/doc/groovy/PolyglotMagic.ipynb
#
# https://github.com/twosigma/beakerx/blob/00659097665cb78bdfd095882df02224e2699ba1/doc/scala/Scala.ipynb
# + id="0gKzr8vB1TfS" colab_type="code" colab={}
# !conda create -y -n beakerx 'python>=3'
# !. activate beakerx
# !conda config --env --add pinned_packages 'openjdk>8.0.121'
# !conda install -y -c conda-forge ipywidgets beakerx
# + id="bo4ILfsqjOOc" colab_type="code" colab={}
# !beakerx
# + id="Q45RGgdx1-AZ" colab_type="code" colab={}
# !jupyter kernelspec list
# + id="nbR8-omi3OoO" colab_type="code" colab={} language="scala"
# new TableDisplay(Seq(Map("a" -> 1, "b" -> 2, "c" -> 3),
# Map("a" -> 4, "b" -> 5, "c" -> 6),
# Map("a" -> 7, "b" -> 8, "c" -> 8)))
# + id="LiJL8HuY4mki" colab_type="code" colab={}
import os
os.getcwd()
# + [markdown] id="_UQiHuGT-vAK" colab_type="text"
#
# + id="SVQfP9DS5YuF" colab_type="code" colab={}
import findspark
findspark.init()
# + [markdown] id="zA68M6An8Rhq" colab_type="text"
# # Install pyspark
#
# + id="QaHILu-f8TgQ" colab_type="code" colab={}
# !pip install pyspark
# + id="mNeOMa5p8UBT" colab_type="code" colab={}
# !pip install --upgrade pip
# + [markdown] id="W3yh5l6i-HQG" colab_type="text"
# # install apache toree
# + id="pIY09tkK-Jlh" colab_type="code" colab={}
# !pip install --upgrade --pre toree
# + id="mmuzS356-nWA" colab_type="code" colab={}
# !pip install virtualenv
# + id="I5c4R2pK_uvA" colab_type="code" colab={}
# !virtualenv -p python3.6 env3
# + id="mha2nggF_zj7" colab_type="code" colab={}
# !. env3/bin/activate # source with dot notation
# + [markdown] id="_vUOhFNba1D6" colab_type="text"
# # Install Java, Spark, and Findspark
# + id="WyeqxJ3Xa5h8" colab_type="code" colab={}
# !apt-get install openjdk-8-jdk-headless -qq > /dev/null
# !wget -q http://apache.osuosl.org/spark/spark-2.2.2/spark-2.2.2-bin-hadoop2.7.tgz
# !tar xf spark-2.2.2-bin-hadoop2.7.tgz
# !pip install -q findspark
# + [markdown] id="eaw5zlXxa820" colab_type="text"
# #### Set Environment Variables
# Set the locations where Spark and Java are installed.
# + id="F3aybSPha-Tv" colab_type="code" colab={}
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["SPARK_HOME"] = "/content/spark-2.2.2-bin-hadoop2.7"
# + [markdown] id="Y5A12lwfbB1f" colab_type="text"
# #### Start a SparkSession
# This will start a local Spark session.
# + id="1xky5yhcbErG" colab_type="code" colab={}
import findspark
findspark.init()
from pyspark.sql import SparkSession
spark = SparkSession.builder.master("local[*]").getOrCreate()
# + [markdown] id="_otb-GVHbHNt" colab_type="text"
# #### Use Spark!
# That's all there is to it - you're ready to use Spark!
# + id="IY4FudM5bKCy" colab_type="code" colab={}
df = spark.createDataFrame([{"hello": "world"} for x in range(1000)])
df.show(3)
# + [markdown] id="0X0RTEfsfS9l" colab_type="text"
# #### Stop spark
# + id="kipQIDzpdTjV" colab_type="code" colab={}
spark.stop()
# + [markdown] id="4zh_3cLNknyY" colab_type="text"
# # Uploading and Using Data Files
#
# uploads from local computer
# + id="Z7tH0DhofXnq" colab_type="code" colab={}
from google.colab import files
uploaded = files.upload()
# + [markdown] id="3alLLybNlEhy" colab_type="text"
# ## Iterate uploaded files
#
# + id="SQsTJFN1kroJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="586b2ab5-2b4b-4637-f49b-78fa39273906"
for fn in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(name=fn, length=len(uploaded[fn])))
# + [markdown] id="p8DobPkdlKDV" colab_type="text"
# ## [load the contents of the file](https://stackoverflow.com/a/49226782)
#
# An example of loading contents of file into a Pandas DataFrame is shown below:
#
# + id="QNgqqyyTlGx3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1105} outputId="855bbf4c-6414-4e95-8a73-102aca5ab653"
import pandas as pd
import io
df = pd.read_csv(io.StringIO(uploaded['iris.csv'].decode('utf-8')))
print(df)
| Scala_on_Google_Colaboratory.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## 1. Create the language models
import os
import pickle
from collections import Counter, defaultdict
def deserialize_object(path):
    # load a pickled object from disk
    with open(path, "rb") as pickle_in:
        obj = pickle.load(pickle_in)
    print("Loaded object", path.split("/")[-1])
    return obj
path_dic = 'dic_specialties.pkl'
dic_spe = deserialize_object(path_dic)
path_save_model_lang = './models_languages/'
def read_diccionary_specialties(dic_spe, path_save_model_lang):
    for specialty, dic in dic_spe.items():
        print("Specialty:", specialty)
        list_doc_specialty = dic['docs']
        dic_terms = dic['terms']
        # file where the language model will be saved
        fout = open(path_save_model_lang + specialty + '.ml', 'w')
        # build the language model
        model = defaultdict(lambda: defaultdict(lambda: 0))
        for term, list_doc_term in dic_terms.items():
            if isinstance(term, tuple):
                # bigrams
                if len(term) == 2:
                    w1 = term[0]
                    w2 = term[1]
                    model[w1][w2] += len(list_doc_term)
                # trigrams
                elif len(term) == 3:
                    w1 = term[0]
                    w2 = term[1]
                    w3 = term[2]
                    model[(w1, w2)][w3] += len(list_doc_term)
        # some examples
        #print(model["enfermedad", "de"]["alzheimer"])
        print(">>Writing model")
        # turn the counts into probabilities
        for tuple_1 in model:
            total_count = float(sum(model[tuple_1].values()))
            #print("Total count:", tuple_1, '->', total_count)
            for tuple_2 in model[tuple_1]:
                model[tuple_1][tuple_2] /= total_count
                fout.write(str(tuple_1) + ' ' + str(tuple_2) + ' ' + str(model[tuple_1][tuple_2]) + "\n")
        print(">>Writing unigrams")
        # unigrams
        for term, list_doc_term in dic_terms.items():
            if not isinstance(term, tuple):
                fout.write(str(term) + ' ' + str(len(list_doc_term)) + "\n")
        fout.close()
        print("File written")
read_diccionary_specialties(dic_spe, path_save_model_lang)
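# The counting-and-normalising scheme above can be sketched on a toy corpus. This is a minimal, self-contained illustration of the same bigram technique; the corpus below is made up and does not come from the specialty dictionaries:

```python
from collections import defaultdict

# toy corpus standing in for the documents of one specialty
corpus = [["heart", "failure"], ["heart", "attack"], ["heart", "failure"]]

# count bigram occurrences: model[w1][w2] = count
model = defaultdict(lambda: defaultdict(int))
for doc in corpus:
    for w1, w2 in zip(doc, doc[1:]):
        model[w1][w2] += 1

# normalise the counts into conditional probabilities P(w2 | w1)
for w1 in model:
    total = float(sum(model[w1].values()))
    for w2 in model[w1]:
        model[w1][w2] /= total

print(model["heart"]["failure"])  # 2 of the 3 bigrams starting with "heart"
```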
| 04_language_model/01_create_model_language.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Lead Scoring
# **Authors**: <NAME> & <NAME>
# In this exercise we are going to use a strategy called “lead scoring” to predict the probability that a prospect will become a customer. To achieve this, we are going to use binary classification.
#
# **There are a few things that you need for this exercise:**
#
# 1. Your DataRobot login
# 2. Your DataRobot API key
# 3. The exercise dataset: bank-full.csv
#
# ## Credentials
# To access the DataRobot API, you need to connect to it. To ensure only authorized users access the API, you must provide your username and password, or your API token. You also need to ensure that "API Access" is enabled in your configuration (ask your administrator if it is not).
#
# To find your API Token, visit YOUR_API_HOST, log in, and look under **Developer Tools** (under the person icon).
#
#
# ## Import Library and Enter Credentials
#
# You must import the appropriate libraries and enter your credentials to connect with DataRobot.
import datarobot as dr
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
# +
token = 'YOUR_TOKEN'
endpoint = 'YOUR_DR_URL/api/v2'
dr.Client(token=token,
endpoint= endpoint)
# -
# ## Dataset
#
# The dataset was taken from the UCI Machine Learning Repository. It was published in a paper by <NAME> and colleagues in 2014.
#
# *[Moro et al., 2014] <NAME>, <NAME> and <NAME>. A Data-Driven Approach to Predict the Success of Bank Telemarketing. Decision Support Systems, Elsevier, 62:22-31, June 2014*
#
# This dataset includes information from a direct telemarketing campaign of a Portuguese bank. The target is indicated by the feature “**y**” and a “yes” means that the prospect purchased the product being offered and “no” means that they did not.
#
#
df = pd.read_csv('bank-full.csv', sep = ";")
df.head()
# ## Start Project
#
# For the setup, start the project with the dataset (**bank-full.csv**) and indicate the target as “**y**”. Set the mode to "**Quick**".
#
project = dr.Project.create(project_name='bank-full.csv',
sourcedata= df)
# +
project.set_target(
target='y',
worker_count = '-1',
mode=dr.AUTOPILOT_MODE.QUICK
)
project.wait_for_autopilot() #Wait for autopilot to complete
# -
# It can be onerous to rerun Autopilot every time you want to run the script. If your project is already created, you can comment out the previous chunk of code so that Autopilot is not rerun, and simply refer to the project with the `Project.get` function (see the code below). The project ID is the first identifier in the project's URL.
#
# +
#project = dr.Project.get(project_id='YOUR_PROJECT_ID')
# -
# ## Select Model to Evaluate
#
# You want to select the 80% version of the top model to evaluate. You can use the code below to select this model.
models = project.get_models(
search_params={
'sample_pct__gt': 80,
})
model = models[1]
model.id
# ## Get Validation Scores
#
# You can get the validation and cross-validation scores for every possible metric of the model using the code below. This can be pulled for multiple models if you want to compare them programmatically.
#
model.metrics
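# When comparing several models programmatically, the metric dictionaries can be reduced to a ranking with plain Python. A small sketch follows; the model names and AUC values below are invented placeholders, not real DataRobot output:

```python
# hypothetical (model name, metrics) pairs shaped like model.metrics entries
candidates = [
    ("Light GBM", {"AUC": {"validation": 0.91, "crossValidation": 0.90}}),
    ("ENET Blender", {"AUC": {"validation": 0.93, "crossValidation": 0.92}}),
    ("Logistic Regression", {"AUC": {"validation": 0.88, "crossValidation": 0.87}}),
]

# rank by cross-validation AUC, best first
ranked = sorted(candidates,
                key=lambda m: m[1]["AUC"]["crossValidation"],
                reverse=True)
best_name = ranked[0][0]
print(best_name)  # ENET Blender
```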
# ## Get ROC Curve
#
# Now that we know the overall performance of the model, let's take a deeper look at the ROC Curve. You can use the code below to pull the ROC chart from DataRobot and plot it.
# +
roc = model.get_roc_curve('crossValidation')
#Save the result into a pandas dataframe
df = pd.DataFrame(roc.roc_points)
df.head()
# -
# ## Plot the ROC Curve
# +
dr_roc_green = '#03c75f'
white = '#ffffff'
dr_purple = '#65147D'
dr_dense_green = '#018f4f'
dr_dark_blue = '#08233F'
fig = plt.figure(figsize=(8, 8))
axes = fig.add_subplot(1, 1, 1, facecolor=dr_dark_blue)
plt.scatter(df.false_positive_rate, df.true_positive_rate, color=dr_roc_green)
plt.plot(df.false_positive_rate, df.true_positive_rate, color=dr_roc_green)
plt.plot([0, 1], [0, 1], color=white, alpha=0.25)
plt.title('ROC curve')
plt.xlabel('False Positive Rate (Fallout)')
plt.xlim([0, 1])
plt.ylabel('True Positive Rate (Sensitivity)')
plt.ylim([0, 1])
# -
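# The area under the curve just plotted can be recovered from the same (false_positive_rate, true_positive_rate) points with the trapezoidal rule. A minimal sketch on hand-made points rather than the DataRobot values:

```python
def auc_trapezoid(fpr, tpr):
    """Integrate TPR over FPR with the trapezoidal rule.

    Assumes the points are sorted by increasing FPR."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(zip(fpr, tpr), zip(fpr[1:], tpr[1:])):
        area += (x1 - x0) * (y0 + y1) / 2.0
    return area

# a random classifier's diagonal has AUC 0.5
print(auc_trapezoid([0.0, 0.5, 1.0], [0.0, 0.5, 1.0]))  # 0.5
# a perfect classifier reaches TPR 1 at FPR 0, so AUC is 1.0
print(auc_trapezoid([0.0, 0.0, 1.0], [0.0, 1.0, 1.0]))  # 1.0
```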
# ## Get the Feature Impact
#
# Now that we have evaluated our model, let's take a look at which features have the highest impact.
# +
#Get Feature Impact
feature_impact = model.get_or_request_feature_impact()
#Save feature impact in pandas dataframe
fi_df = pd.DataFrame(feature_impact)
fi_df
# -
# ## Plot Feature Impact
#
#
# Feature impact is calculated using Permutation Importance. We can see that the most impactful feature is **duration**, followed by **month** and **day**.
#
#
# +
fig, ax = plt.subplots(figsize = (12,5))
#Plot feature impact
sns.barplot(x='impactNormalized', y='featureName', data=fi_df, color='#2D8FE2')
# -
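# Permutation importance itself is easy to sketch: shuffle one feature column, re-score the model, and measure how far the score drops. The toy data and "model" below are invented to show the idea only; DataRobot's implementation is more involved:

```python
import random

random.seed(0)

# toy data: the label y simply copies feature x0; x1 is pure noise
rows = [(i % 2, random.random(), i % 2) for i in range(200)]  # (x0, x1, y)

def accuracy(data):
    # toy "model": predict y directly from x0
    return sum(1 for x0, _, y in data if x0 == y) / len(data)

base = accuracy(rows)  # 1.0 by construction

# permutation importance of x0: shuffle that column and re-score
x0_shuffled = [r[0] for r in rows]
random.shuffle(x0_shuffled)
permuted = [(s, x1, y) for s, (_, x1, y) in zip(x0_shuffled, rows)]
drop = base - accuracy(permuted)

print(base, drop)  # the informative feature shows a large drop
```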
# ## Get Holdout Predictions
#
# By default DataRobot does a five-fold cross-validation and 20% holdout. The holdout data was not used during the training and we can pull these scores to see how our model predicted on new data.
training_predictions_job = model.request_training_predictions(dr.enums.DATA_SUBSET.HOLDOUT)
training_predictions = training_predictions_job.get_result_when_complete()
# Download to a CSV file.
training_predictions.download_to_csv('predictions.csv')
# ### Other Analyses to Try
#
# You can do a lot programmatically with the API. You can get confusion matrices, lift charts, word clouds, and even create model factories. Check out the [tutorials on our GitHub](https://github.com/datarobot-community/tutorials-for-data-scientists) page!
#
| Classification/Python/lead_scoring_bank_marketing/Lead Scoring.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="UvkYjiM9Ywyo"
# # Data Storytelling on Titanic
#
# 
#
# The sinking of the Titanic is one of the most infamous shipwrecks in history.
#
# On April 15, 1912, during her maiden voyage, the RMS Titanic, widely considered "unsinkable", sank after colliding with an iceberg. Unfortunately, there weren't enough lifeboats for everyone on board, resulting in the deaths of 1502 of the 2224 passengers and crew.
#
# While there was some element of luck involved in surviving, it seems some groups of people were more likely to survive than others. Let's try to visualize some interesting patterns in this dataset.
#
#
# + [markdown] colab_type="text" id="FZaRWKFDZdAy"
# # Load Dependencies
# + colab={} colab_type="code" id="UZetwNDgYwyq"
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
sns.set()
# + [markdown] colab_type="text" id="U04bJenIYwys"
# ## Load Dataset
# + colab={"base_uri": "https://localhost:8080/", "height": 33} colab_type="code" id="lXYjiagaYwys" outputId="a0735db4-ca0a-4686-a834-b86539e376c2"
df = pd.read_csv("https://raw.githubusercontent.com/dipanjanS/appliedml_workshop_dhs_av_2019/master/Module%2003%20-%20Data%20Visualization%20_Storytelling/titanic.csv")
df.shape
# + [markdown] colab_type="text" id="dFw8PT1YYwyu"
# ### Sample records
# + colab={"base_uri": "https://localhost:8080/", "height": 196} colab_type="code" id="rS7lT-XxYwyv" outputId="32094bce-a876-4f1e-a1fd-a8e1d9794ad2"
df.head()
# + [markdown] colab_type="text" id="96BRAo5KZwFM"
# # Dataset Details
#
# 
# + [markdown] colab_type="text" id="EWzqbIIGYwyw"
# ### Basic Stats
# + colab={"base_uri": "https://localhost:8080/", "height": 301} colab_type="code" id="t0u_GG_TYwyx" outputId="0836d0ee-c1ab-42f4-9303-053e92e36cfd"
df.info()
# + [markdown] colab_type="text" id="65uykptKYwyz"
# Cabin seems to have lots of missing data, as do attributes like Age and Embarked.
# + [markdown] colab_type="text" id="G-HgX87KYwyz"
# ## Story Begins...
# + [markdown] colab_type="text" id="iw78BqwEYwy0"
# ### Visualising missing values using heatmaps
# + colab={"base_uri": "https://localhost:8080/", "height": 384} colab_type="code" id="k5AK5opzYwy0" outputId="0dbd6699-6c01-4e6a-ddd9-cb47c1179390"
# set plotting canvas to 9x5 dimensions
fig, ax = plt.subplots(figsize=(9,5))
sns.heatmap(df.isnull(), cbar=False,cmap=sns.color_palette("Paired"));
# + [markdown] colab_type="text" id="1uUvcQ4NYwy2"
# ### Who Survived?
# + colab={"base_uri": "https://localhost:8080/", "height": 427} colab_type="code" id="8xzhYZARYwy2" outputId="496595be-2a62-4b38-ab95-b4bcffea57d1"
fig, ax = plt.subplots(1, 2, figsize=(16, 7))
df['Survived'][df['Sex'] == 'female'].value_counts().plot.pie(
explode=[0, 0.3], autopct='%1.2f%%', ax=ax[0], shadow=True)
df['Survived'][df['Sex'] == 'male'].value_counts().plot.pie(
explode=[0, 0.3], autopct='%1.2f%%', ax=ax[1], shadow=True)
ax[0].set_title('%age of Female Survivors')
ax[1].set_title('%age of Male Survivors');
# + [markdown] colab_type="text" id="kIV7Zz_wYwy4"
# _Humans are bad at reading angles, so use pie charts carefully.
# See more here: https://www.geckoboard.com/blog/pie-charts/_
# + colab={"base_uri": "https://localhost:8080/", "height": 284} colab_type="code" id="KZOCoDAeYwy4" outputId="ed40164e-1641-4a72-9409-2575e2dffb93"
df.groupby('Sex').agg({'Survived':'sum'}).plot(kind='barh')
plt.title('Survivors');
# + [markdown] colab_type="text" id="xOaG9ZxXYwy6"
# Women were more likely to survive than men
# + [markdown] colab_type="text" id="9MXLaChtYwy6"
# ### Does Money make you Safer?
# + colab={"base_uri": "https://localhost:8080/", "height": 136} colab_type="code" id="6G277C4GYwy7" outputId="12bb3c4d-8d40-4574-d4d0-0274beab3544"
pd.crosstab(
df.Pclass, df.Survived,
margins=True).style.background_gradient(cmap='summer_r')
# + colab={"base_uri": "https://localhost:8080/", "height": 84} colab_type="code" id="CKMl7EfGYwy8" outputId="cc40f89d-3547-4dbc-bf70-959b5807f199"
print("% of survivals in")
print("Pclass=1 : {}%".format(
round(
df.Survived[df.Pclass == 1].sum() /
df[df.Pclass == 1].Survived.count(), 5) * 100))
print("Pclass=2 : {}%".format(
round(
df.Survived[df.Pclass == 2].sum() /
df[df.Pclass == 2].Survived.count(), 5) * 100))
print("Pclass=3 : {}%".format(
round(
df.Survived[df.Pclass == 3].sum() /
df[df.Pclass == 3].Survived.count(), 5) * 100))
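# The three near-identical print blocks above compute a single quantity, the survival rate per class, which is just a grouped mean. A pure-Python sketch on made-up records follows; with the real DataFrame, `df.groupby('Pclass')['Survived'].mean()` produces the same table in one line:

```python
from collections import defaultdict

# made-up (Pclass, Survived) records standing in for the Titanic rows
records = [(1, 1), (1, 0), (1, 1), (2, 1), (2, 0), (3, 0), (3, 0), (3, 1)]

totals = defaultdict(int)
survived = defaultdict(int)
for pclass, surv in records:
    totals[pclass] += 1
    survived[pclass] += surv

# survival rate per class = grouped mean of the Survived flag
rates = {p: survived[p] / totals[p] for p in sorted(totals)}
print(rates)
```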
# + [markdown] colab_type="text" id="JHT6K03sYwy-"
# ### Did Point of Embarkation have any impact?
# + colab={"base_uri": "https://localhost:8080/", "height": 365} colab_type="code" id="6IqgPFOVYwy_" outputId="5ca49389-8b57-44b2-cd8c-7ec3d07d6508"
sns.catplot(x='Survived', col='Embarked', kind='count', data=df);
# + [markdown] colab_type="text" id="f5Q6WUoPYwzA"
# ### Cross Tab of Class, Gender and Embarkation point with Survival
# + colab={"base_uri": "https://localhost:8080/", "height": 156} colab_type="code" id="T6fUKoooYwzB" outputId="0a554cfa-bee5-4957-93ce-f841c44a0d56"
pd.crosstab(
index=[df.Survived],
columns=[df.Sex, df.Pclass, df.Embarked],
margins=True).style.background_gradient(cmap='summer_r')
# + [markdown] colab_type="text" id="gpAPIeRoYwzC"
# __Insights__
#
# - Almost all women from Pclass 2 who embarked at C and Q survived.
# - Nearly all women of Pclass 1 survived.
# - All men of Pclass 1 and 2 who embarked at Q died!
# + [markdown] colab_type="text" id="Xl06JIRtYwzD"
# ### Age Distribution
# + colab={"base_uri": "https://localhost:8080/", "height": 301} colab_type="code" id="i-A41LAmYwzD" outputId="76bfa6d2-0400-407a-b492-efd0a66cfa7b"
sns.boxplot(x='Embarked', y='Age', data=df)
plt.title("Age distribution Vs Embarked Port");
# + [markdown] colab_type="text" id="iGz9b1NwYwzF"
# ### Advanced Plots: Swarm/Violin Plots
# + colab={} colab_type="code" id="thd_B4UhYwzF"
cm_surv = ["grey" , "green"]
# + colab={"base_uri": "https://localhost:8080/", "height": 464} colab_type="code" id="awiLfxXeYwzH" outputId="59a56790-d326-4687-b3ee-51f753bd2086"
fig, ax = plt.subplots(figsize=(13,7))
sns.swarmplot(x='Pclass', y='Age', hue='Survived', dodge=True, data=df , palette=cm_surv, size=7, ax=ax)
plt.title('Survivals Vs Age and Pclass ');
# + [markdown] colab_type="text" id="_x9E0ouGYwzI"
# Kids in Class 3 had a really bad time!
# + [markdown] colab_type="text" id="6Td5YjDaYwzJ"
# ### How were the Tickets Priced?
# + colab={"base_uri": "https://localhost:8080/", "height": 285} colab_type="code" id="lbQgD5hJYwzJ" outputId="c4bc45c6-f6f6-4c40-97c5-bdd561826f81"
sns.distplot(df['Fare']);
# + [markdown] colab_type="text" id="ubKJyeKsYwzM"
# ### Correlation between attributes
# + colab={"base_uri": "https://localhost:8080/", "height": 434} colab_type="code" id="nooeBxJbYwzN" outputId="a58da7a6-97b7-477f-a43a-152fa534f94b"
corr = df.corr()
f,ax = plt.subplots(figsize=(8, 6))
sns.heatmap(corr, annot=True, linewidths=1.5 , fmt='.2f',ax=ax);
# This works around a bug in seaborn/matplotlib that should be fixed in matplotlib 3.1.2
b, t = ax.get_ylim() # discover the values for bottom and top
b += 0.5 # Add 0.5 to the bottom
t -= 0.5 # Subtract 0.5 from the top
ax.set_ylim(b, t); # update the ylim(bottom, top) values
# + [markdown] colab_type="text" id="Z9Yxx4cxYwzP"
# We see a lot of negative correlations.
# Since most columns are categorical, we need to perform some transformations.
# + colab={} colab_type="code" id="RZ2sTu_nYwzQ"
# add bins for fare and age
df_ml = pd.get_dummies(df, columns=['Sex', 'Embarked', 'Pclass'], drop_first=True)
df_ml.drop(['PassengerId','Name','Ticket', 'Cabin', 'Age', 'Fare'],axis=1,inplace=True)
df_ml.dropna(inplace=True)
# + colab={"base_uri": "https://localhost:8080/", "height": 437} colab_type="code" id="dRNqtNNwYwzR" outputId="32e17fd3-c9c5-4683-a648-a3d59e4188f7"
corr = df_ml.corr()
f,ax = plt.subplots(figsize=(9,6))
sns.heatmap(corr, annot = True, linewidths=1.5 , fmt = '.2f',ax=ax)
# This works around a bug in seaborn/matplotlib that should be fixed in matplotlib 3.1.2
b, t = ax.get_ylim() # discover the values for bottom and top
b += 0.5 # Add 0.5 to the bottom
t -= 0.5 # Subtract 0.5 from the top
ax.set_ylim(b, t); # update the ylim(bottom, top) values
| Chapter 03 - Data Visualization and Storytelling/data_visualization_storytelling.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + raw_mimetype="text/restructuredtext" active=""
# .. _nb_operators:
# + raw_mimetype="text/restructuredtext" active=""
# .. toctree::
# :maxdepth: 1
# :hidden:
#
# sampling.ipynb
# selection.ipynb
# crossover.ipynb
# mutation.ipynb
# repair.ipynb
#
# -
# ## Operators
#
# Operators are the key to customize genetic algorithms. In the following the different type of operators are listed. For details about each operator we refer to our corresponding documentation.
# ### Sampling
# |Name|Convenience|
# |---|---|
# |[Random](sampling.ipynb)|"(real\|int\|bin)_random"|
# |[Latin Hypercube Sampling](sampling.ipynb)|"real_lhs"|
# |[Random Permutation Sampling](sampling.ipynb)|"perm_random"|
# ### Selection
# |Name|Convenience|
# |---|---|
# |[Random](selection.ipynb)|"random"|
# |[Tournament Selection](selection.ipynb)|"tournament"|
# ### Mutation
# |Name|Convenience|
# |---|---|
# |[Polynomial](mutation.ipynb)|"(real\|int)_pm"|
# |[Bitflip](mutation.ipynb)|"bin_bitflip"|
# |[Inverse Mutation](mutation.ipynb)|"perm_inv"|
# ### Crossover
# |Name|Convenience|
# |---|---|
# |[Simulated Binary](crossover.ipynb)|"(real\|int)_sbx"|
# |[Uniform](crossover.ipynb)|"(real\|bin\|int)_ux"|
# |[Half Uniform](crossover.ipynb)|"(bin\|int)_hux"|
# |[Differential Evolution](crossover.ipynb)|"real_de"|
# |[One Point](crossover.ipynb)|"(real\|int\|bin)_one_point"|
# |[Two Point](crossover.ipynb)|"(real\|int\|bin)_two_point"|
# |[K Point](crossover.ipynb)|"(real\|int\|bin)_k_point"|
# |[Exponential](crossover.ipynb)|"(real\|bin\|int)_exp"|
# |[Order Crossover](crossover.ipynb)|"perm_ox"|
# |[Edge Recombination Crossover](crossover.ipynb)|"perm_erx"|
| doc/source/operators/index.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Applying the Variational Quantum Eigensolver in Quantum Chemistry
#
#
# `Linux` `CPU` `Whole Process` `Beginner` `Intermediate` `Expert`
#
# [](https://gitee.com/mindspore/docs/blob/master/docs/mindquantum/docs/source_zh_cn/vqe_for_quantum_chemistry.ipynb) [](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/master/mindquantum/zh_cn/mindspore_vqe_for_quantum_chemistry.ipynb) [](https://authoring-modelarts-cnnorth4.huaweicloud.com/console/lab?share-url-b64=aHR0cHM6Ly9taW5kc3BvcmUtd2Vic2l0ZS5vYnMuY24tbm9ydGgtNC5teWh1YXdlaWNsb3VkLmNvbS9ub3RlYm9vay9tYXN0ZXIvbWluZHF1YW50dW0vemhfY24vbWluZHNwb3JlX3ZxZV9mb3JfcXVhbnR1bV9jaGVtaXN0cnkuaXB5bmI=&imageid=65f636a0-56cf-49df-b941-7d2a07ba8c8c)
#
# ## Overview
#
# Quantum chemistry applies the fundamental theory and methods of quantum mechanics to solve the time-dependent or time-independent Schrödinger equation numerically. Quantum-chemical simulation on high-performance computers has become an important tool for studying the physical and chemical properties of materials. However, solving the Schrödinger equation exactly has exponential complexity, which severely limits the size of the chemical systems that can be simulated. The development of quantum computing in recent years offers a feasible way around this problem: it is expected that the Schrödinger equation can be solved to high precision on a quantum computer with only polynomial complexity.
#
# In 2014, [Peruzzo et al.](https://doi.org/10.1038/ncomms5213) first combined the variational quantum eigensolver (VQE) with [unitary coupled-cluster theory](https://linkinghub.elsevier.com/retrieve/pii/S0009261489873725) for quantum-chemical simulation, solving for the ground-state energy of He-H<sup>+</sup>. The VQE is a hybrid quantum-classical algorithm that is widely used in quantum-algorithm-based chemistry simulations. This tutorial introduces how to use the VQE to solve for the ground-state energy of a molecular system.
#
# This tutorial covers the following topics:
#
# 1. A brief introduction to quantum chemistry.
# 2. Applying the variational quantum eigensolver.
# 3. An efficient, automatically differentiated VQE simulation with MindQuantum.
#
# > This document applies to the CPU environment.
# > The complete runnable sample code is available here: <https://gitee.com/mindspore/mindquantum/blob/master/tutorials/source/vqe_for_quantum_chemistry.py>.
#
# ## Environment Setup
#
# This tutorial requires the following packages:
#
# - NumPy
# - SciPy
# - [mindquantum](https://gitee.com/mindspore/mindquantum)
# - [mindspore](https://gitee.com/mindspore/mindspore)
# - PySCF
# - openfermion
# - openfermionpyscf
#
# > All of the above dependencies can be installed with `pip`.
#
#
# ## Importing Dependencies
#
# Import the modules this tutorial depends on.
# +
import numpy as np
from openfermion.chem import MolecularData
from openfermionpyscf import run_pyscf
import mindquantum as mq
from mindquantum import Circuit, X, RX, Hamiltonian
from mindquantum.circuit import generate_uccsd
from mindquantum.nn import generate_pqc_operator
import mindspore as ms
import mindspore.context as context
from mindspore.common.parameter import Parameter
from mindspore.common.initializer import initializer
context.set_context(mode=context.GRAPH_MODE, device_target="CPU")
# -
# ## Quantum Chemistry Methods
#
# The central problem of quantum chemistry is solving the Schrödinger equation. In general, the time-dependent Schrödinger equation is complicated to solve, so the Born-Oppenheimer (BO) approximation is introduced. Under the BO approximation, since atomic nuclei are far heavier and move far more slowly than electrons, the nuclear and electronic degrees of freedom can be separated and treated independently. This leads to the following time-independent equation of electronic motion, also called the stationary Schrödinger equation:
#
# $$
# \hat{H} |\Psi\rangle = E |\Psi\rangle
# $$
#
# where $\hat{H}$ contains the following three terms:
#
# $$
# \hat{H} = \hat{K} _{e} + \hat{V} _{ee} + \hat{V} _{Ne}
# $$
#
# namely the electron kinetic energy, the electron-electron potential energy, and the electron-nucleus potential energy.
#
# Several classes of numerical methods can solve the stationary Schrödinger equation. This tutorial introduces one of them: wave-function methods. Wave-function methods directly solve for the eigen-wavefunction and eigenenergy of a given molecular Hamiltonian; many open-source packages implement them, such as [PySCF](http://pyscf.org/). We start from a simple example, the lithium hydride molecule, using openfermion together with the openfermionpyscf plugin. First, define the molecular geometry:
dist = 1.5
geometry = [
["Li", [0.0, 0.0, 0.0 * dist]],
["H", [0.0, 0.0, 1.0 * dist]],
]
basis = "sto3g"
spin = 0
print("Geometry: \n", geometry)
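# As a quick sanity check on a geometry in this layout, the bond length can be recovered from the Cartesian coordinates. A small sketch using only the standard library:

```python
import math

# same layout as the geometry list above: [atom, [x, y, z]]
geometry = [
    ["Li", [0.0, 0.0, 0.0]],
    ["H", [0.0, 0.0, 1.5]],
]

def bond_length(coords_a, coords_b):
    # Euclidean distance between two [x, y, z] coordinate lists
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(coords_a, coords_b)))

print(bond_length(geometry[0][1], geometry[1][1]))  # 1.5 (angstrom)
```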
# 上面的代码定义了一个Li-H键长为1.5Å分子。使用STO-3G基组进行计算。接下来使用openfermionpyscf,调用PySCF进行HF、CCSD和FCI计算。这三种方法属于波函数方法,开始计算之前,先对这些方法作一个简单的介绍。
#
# ### Wavefunction Methods
#
# One method for solving the stationary Schrödinger equation is the [Hartree-Fock (HF)](https://doi.org/10.1098/rspa.1935.0085) method, proposed by Hartree and others around the 1930s and a cornerstone of quantum chemistry. The HF method introduces the single-determinant approximation, i.e. the wavefunction of an $N$-electron system is represented by a single determinantal wavefunction:
#
# $$
# | \Psi \rangle = | \psi_{1} \psi_{2} \psi_{3} \dots \psi_{N} \rangle
# $$
#
# where $| \psi_{1} \psi_{2} \psi_{3} \dots \rangle$ denotes the $N$-th order determinant built from the set of spin orbitals $\{ \psi_{i} \}$.
# Each spin orbital $\psi_{i}$ can be further expanded in a set of basis functions of known form:
#
# $$\psi_{i} = \phi_{i} \eta_{i}$$
# $$\phi_{i} = \sum_{\mu}{C_{\mu i} \chi_{\mu}}$$
#
# where the $\{\chi_{\mu}\}$ are called basis functions and can be, for example, Gaussian functions.
# This approximation accounts for the exchange interaction between electrons but neglects their correlation, so it cannot correctly compute properties such as dissociation energies.
#
# Improvements on HF start from the wavefunction expansion theorem, which states that if $\{ \psi_{i} \}$ is a complete set of spin orbitals, the $N$-electron wavefunction can be expanded exactly in the determinantal wavefunctions built from $\{ \psi_{i} \}$:
#
# $$
# | \Psi \rangle = \sum^{\infty} _ {i_{1} < i_{2} < \dots < i_{N}} {C_{i_{1} i_{2} \dots i_{N}} | \psi_{i_{1}} \psi_{i_{2}} \dots \psi_{i_{N}} \rangle}
# $$
#
# This leads to the Configuration Interaction (CI) method:
#
# $$
# | \Psi_{CI} \rangle = C_{0} | \Psi_{HF} \rangle + \sum^{a\rightarrow\infty} _{i\in occ\\\\a\not\in occ}{C^{a} _{i} | \Psi^{a} _{i} \rangle } + \sum^{ab\rightarrow\infty} _{ij\in occ\\\\ab\not\in occ}{C^{ab} _{ij} | \Psi^{ab} _{ij} \rangle }
# $$
#
# Here $| \Psi^{a} _{i} \rangle$ denotes the singly excited wavefunction with an electron excited from orbital $i$ to orbital $a$, and so on. A CI that keeps only single and double excitations is called CISD, i.e. Configuration Interaction with Singles and Doubles. A Configuration Interaction that includes everything from the HF ground-state wavefunction up to $N$-fold excitations is called Full Configuration Interaction (FCI); the FCI wavefunction is the exact solution of the stationary Schrödinger equation within the given set of basis functions.
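# Before moving on, it is worth quantifying the cost of FCI. The following is a small illustrative sketch (added here, not part of the original tutorial's code): the size of the FCI space is just the number of ways to distribute the electrons over the orbitals, which can be counted with binomial coefficients.

```python
from math import comb

def fci_space_size(n_spatial_orbitals, n_alpha, n_beta):
    """Number of Slater determinants obtained by distributing the alpha and
    beta electrons over the spatial orbitals independently."""
    return comb(n_spatial_orbitals, n_alpha) * comb(n_spatial_orbitals, n_beta)

# LiH in the STO-3G basis: 6 spatial orbitals, 4 electrons (2 alpha + 2 beta)
print(fci_space_size(6, 2, 2))    # 225 determinants
# The count grows combinatorially with system size:
print(fci_space_size(20, 7, 7))   # 6009350400 determinants, already ~6 billion
```

# This combinatorial growth is why FCI is feasible only for small molecules and basis sets.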
#
# ### Second Quantization
#
# In the second-quantization formalism, the Hamiltonian of the system takes the following form:
#
# $$
# \hat{H} = \sum_{p, q}{h^{p} _ {q} E^{p} _ {q}} + \sum_{p, q, r, s}{\frac{1}{2} g^{pq} _ {rs} E^{pq} _ {rs} }
# $$
#
# where $E^{p} _ {q}$ and $E^{pq} _ {rs}$ are, respectively:
#
# $$
# E^{p} _ {q} = a^{\dagger} _ {p} a_{q}
# $$
# $$
# E^{pq} _ {rs} = a^{\dagger} _ {p} a^{\dagger} _ {q} a_{r} a_{s}
# $$
#
# Here $a^{\dagger} _ {p}$ and $a _ {q}$ are the creation and annihilation operators, respectively.
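# As a small sketch (an illustration added here, not mindquantum code), the action of $a^{\dagger} _ {p}$ on an occupation-number vector can be written out directly, including the fermionic sign $(-1)^{\sum_{q<p} n_q}$ that enforces anticommutation:

```python
def create(p, occ):
    """Apply the creation operator a^dagger_p to an occupation-number
    vector (a list of 0/1 entries). Returns (sign, new_occ), or None if
    orbital p is already occupied (the state is annihilated)."""
    if occ[p] == 1:
        return None
    sign = (-1) ** sum(occ[:p])  # phase from anticommuting past occupied orbitals
    new_occ = occ.copy()
    new_occ[p] = 1
    return sign, new_occ

print(create(2, [1, 1, 0, 0]))  # (1, [1, 1, 1, 0])
print(create(1, [1, 0, 1, 0]))  # (-1, [1, 1, 1, 0])
```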
#
# Using the second-quantized formalism, excited-state wavefunctions can be expressed very conveniently:
#
# $$
# | \Psi^{abc\dots} _ {ijk\dots} \rangle = a^{\dagger} _ {a} a^{\dagger} _ {b} a^{\dagger} _ {c} \dots a_{i} a_{j} a_{k} \dots | \Psi \rangle
# $$
#
# One improvement over the CI method is Coupled-Cluster theory (CC). CC introduces an exponentiated operator:
#
# $$
# | \Psi_{CC} \rangle = \exp{(\hat{T})} | \Psi_{HF} \rangle
# $$
#
# where the cluster operator $\hat{T}$ is a sum over excitation operators:
#
# $$
# \hat{T} = \sum_{p\not\in occ\\\\q\in occ}{\theta^{p} _ {q} E^{p} _ {q}} + \sum_{pq\not\in occ\\\\rs\in occ}{\theta^{pq} _ {rs} E^{pq} _ {rs}} + \dots
# $$
#
# Here the $\theta$, like the $C$ coefficients in the CI method, are the parameters to be solved for. From the Taylor expansion of the exponential, it is easy to see that even if the cluster operator $\hat{T}$ contains only low-order excitation terms, $\exp{(\hat{T})}$ still implicitly includes some higher-order excitations. As a result, CC converges to the FCI wavefunction much faster than CI: truncated at the same excitation level $K$, e.g. $K=2$, CCSD is more accurate than CISD.
#
#
# <!--
# In general, a method is considered sufficiently accurate if it reaches chemical accuracy, i.e. the difference between its computed energy and the FCI energy is less than 1 kcal/mol; CCSD(T), which adds triple excitations perturbatively, meets this standard in most cases.
# -->
#
# The effect of electron correlation is to lower the total energy, so the ground-state energy from HF is slightly higher than that from CCSD and FCI. It also follows from the theory above that FCI is far more expensive than CCSD and HF. We demonstrate this with `MolecularData` from openfermion and the `run_pyscf` function from openfermionpyscf:
# +
molecule_of = MolecularData(
geometry,
basis,
multiplicity=2 * spin + 1
)
molecule_of = run_pyscf(
molecule_of,
run_scf=1,
run_ccsd=1,
run_fci=1
)
print("Hartree-Fock energy: %20.16f Ha" % (molecule_of.hf_energy))
print("CCSD energy: %20.16f Ha" % (molecule_of.ccsd_energy))
print("FCI energy: %20.16f Ha" % (molecule_of.fci_energy))
# -
# In the example above, we ran Hartree-Fock (HF), CCSD and FCI total-energy calculations. Timing them would show $T_{HF}<T_{CCSD}\ll T_{FCI}$, and the gap becomes even more pronounced for larger systems such as the ethylene molecule. For the computed total energies, $E_{HF}>E_{CCSD}>E_{FCI}$. After the calculation, we save the results to the file `molecule_file` (i.e. `molecule_of.filename`):
molecule_of.save()
molecule_file = molecule_of.filename
print(molecule_file)
# A major obstacle in quantum chemistry is the computational cost. As the system size (number of electrons and atoms) grows, the time needed to solve for the FCI wavefunction and ground-state energy grows roughly as $2^{N}$; even for a small molecule such as ethylene, an FCI calculation is far from easy. Quantum computers offer a possible way forward: existing research shows that quantum computers can simulate the time evolution of a Hamiltonian with polynomial time complexity, so chemical simulation on quantum processors enjoys an exponential speedup over classical computers. This tutorial introduces one such quantum algorithm: the Variational Quantum Eigensolver.
#
# ## Variational Quantum Eigensolver
#
# The Variational Quantum Eigensolver (VQE) is a hybrid quantum-classical algorithm that applies the variational principle to solve for the ground-state wavefunction. The optimization of the variational parameters is carried out on a classical computer.
#
# ### The Variational Principle
#
# The variational principle can be stated as follows:
#
# $$
# E_{0} \le \frac{\langle \Psi_{t} | \hat{H} | \Psi_{t} \rangle}{\langle \Psi_{t} | \Psi_{t} \rangle}
# $$
#
# where $| \Psi_{t} \rangle$ denotes a trial wavefunction. The variational principle states that, under suitable conditions, the ground-state energy obtained from any trial wavefunction is always greater than or equal to the true ground-state energy. It thus provides a route to solving the molecular ground-state Schrödinger equation: use a parameterized function $f(\theta)$ as an approximation to the exact ground-state wavefunction, and optimize the parameters $\theta$ to approach the exact ground-state energy.
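# The inequality is easy to check numerically. The sketch below (NumPy, added for illustration) uses a random 4x4 Hermitian matrix as a stand-in "Hamiltonian" and verifies that the Rayleigh quotient of every trial vector upper-bounds the lowest eigenvalue:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
H = (A + A.T) / 2                      # a random Hermitian "Hamiltonian"
e0 = np.linalg.eigvalsh(H)[0]          # exact ground-state energy

# The Rayleigh quotient of any trial vector never goes below e0
for _ in range(1000):
    v = rng.normal(size=4)
    energy = (v @ H @ v) / (v @ v)
    assert energy >= e0 - 1e-12
print("variational bound holds for 1000 random trial vectors")
```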
#
# ### Initial State Preparation
#
# In second quantization, the $N$-electron HF wavefunction also takes a very compact form:
#
# $$
# | \Psi_{HF} \rangle = \prod_{i=0}^{N-1}{a^{\dagger} _{i}| 0 \rangle}
# $$
#
# This equation builds a bridge from quantum chemistry wavefunctions to quantum computing: let $|0\rangle$ represent an unoccupied orbital and $|1\rangle$ an orbital occupied by an electron. The $N$-electron HF wavefunction can then be mapped to a string of $M+N$ qubits $| 00\dots 11\dots \rangle$, where $M$ is the number of unoccupied orbitals.
#
# The following code constructs the HF initial-state wavefunction for the LiH molecule. Under the Jordan-Wigner transformation, this amounts to applying $N$ $\text{X}$ gates to $|000\dots\rangle$.
hartreefock_wfn_circuit = Circuit([X.on(i) for i in range(molecule_of.n_electrons)])
print(hartreefock_wfn_circuit)
# Based on this, we can construct a trial wavefunction of the following form:
#
# $$
# | \Psi_{t} \rangle = U(\theta) | \Psi_{HF} \rangle
# $$
#
# where $U(\theta)$ represents a unitary transformation that can be simulated with a quantum circuit, and the initial state $| \Psi_{HF} \rangle$ can be prepared conveniently with several single-qubit $\text{X}$ gates. The specific form of $U(\theta) | \Psi_{HF} \rangle$ is also called the wavefunction ansatz.
# ### Wavefunction Ansatz
#
# The coupled-cluster theory introduced above is a very efficient wavefunction ansatz. To use it on a quantum computer, some modifications are required:
#
# $$
# | \Psi_{UCC} \rangle = \exp{(\hat{T} - \hat{T}^{\dagger})} | \Psi_{HF} \rangle
# $$
#
# Here UCC stands for Unitary Coupled-Cluster theory, and $\hat{T}^{\dagger}$ denotes the Hermitian conjugate of $\hat{T}$, so that $\exp{(\hat{T} - \hat{T}^{\dagger})}$ is a unitary operator. In 2014, [Peruzzo et al.](https://doi.org/10.1038/ncomms5213) first used VQE with a UCCSD (Unitary Coupled-Cluster with Singles and Doubles) ansatz for chemistry simulation experiments on a quantum computer. Note that unitary coupled-cluster implicitly assumes that the parameters $\{\theta\}$ in the cluster operator are real. This assumption causes no problems for molecular systems; for periodic systems, however, the study by [Liu et al.](https://doi.org/10.1021/acs.jctc.0c00881) shows that unitary coupled-cluster introduces errors by neglecting the complex part. This tutorial does not discuss the application of unitary coupled-cluster to periodic systems for now.
# The `generate_uccsd` function in mindquantum's circuit module can read the computational results saved earlier in `molecule_file` and construct the UCCSD wavefunction ansatz, together with its corresponding quantum circuit, in "one click":
ansatz_circuit, \
init_amplitudes, \
ansatz_parameter_names, \
hamiltonian_QubitOp, \
n_qubits, n_electrons = generate_uccsd(molecule_file, th=-1)
# `generate_uccsd` bundles the steps related to unitary coupled-cluster, including deriving the molecular Hamiltonian, constructing the unitary coupled-cluster ansatz operator and extracting the coupled-cluster coefficients from the CCSD calculation. It reads a molecule from its file path, and the parameter `th` is the threshold that determines which parameters in the quantum circuit require gradient updates. In the section [Building the unitary coupled-cluster ansatz step by step](#step-by-step) we demonstrate how to carry out these steps with mindquantum's interfaces. The complete quantum circuit consists of the HF initial state plus the UCCSD ansatz, as shown below:
total_circuit = hartreefock_wfn_circuit + ansatz_circuit
total_circuit.summary()
print("Number of parameters: %d" % (len(ansatz_parameter_names)))
# For the LiH molecule, the UCCSD wavefunction ansatz contains 44 variational parameters. The circuit contains 12612 quantum gates in total and requires 12 qubits to simulate.
# ### General VQE Workflow
#
# The general workflow for solving a molecular ground state with VQE is as follows:
#
# 1. Prepare the HF initial state $| 00\dots11\dots \rangle$;
# 2. Define the wavefunction ansatz, e.g. UCCSD;
# 3. Convert the wavefunction ansatz into a parameterized quantum circuit;
# 4. Initialize the variational parameters, e.g. set them all to 0;
# 5. Measure repeatedly on the quantum computer to obtain the energy of the molecular Hamiltonian under the current variational parameters, $E(\theta)$, as well as the derivatives of the energy with respect to the parameters, $\{ {\partial E} / {\partial \theta_{i}} \}$;
# 6. Update the variational parameters on a classical computer using an optimization algorithm such as gradient descent or BFGS;
# 7. Feed the new variational parameters back into the quantum circuit;
# 8. Repeat steps (5) to (7) until the convergence criterion is met;
# 9. Done.
#
# In step 5, the derivatives $\{ {\partial E} / {\partial \theta_{i}} \}$ can be obtained on a quantum computer with the parameter-shift rule; in a simulator they can also be computed by simulating the parameter-shift rule or by finite differences. Either way this is a rather time-consuming process. mindquantum, built on the mindspore framework, provides machine-learning-style automatic differentiation that can evaluate the derivatives of parameterized quantum circuits efficiently in simulation. The following uses mindquantum to construct a parameterized UCCSD quantum circuit with automatic differentiation:
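# The parameter-shift rule from step 5 can be illustrated with a single-qubit toy model (a hand-rolled NumPy sketch added here, independent of mindquantum). For $RX(\theta)=e^{-i\theta X/2}$ acting on $|0\rangle$ and the observable $Z$, the energy is $E(\theta)=\cos\theta$, and evaluating the energy at two shifted angles reproduces the gradient exactly:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def rx(theta):
    """RX(theta) = exp(-i * theta * X / 2)."""
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * X

def energy(theta):
    psi = rx(theta) @ np.array([1, 0], dtype=complex)
    return (psi.conj() @ Z @ psi).real  # equals cos(theta)

theta = 0.7
# Parameter-shift rule: dE/dtheta = [E(theta + pi/2) - E(theta - pi/2)] / 2
shift_grad = (energy(theta + np.pi / 2) - energy(theta - np.pi / 2)) / 2
print(abs(shift_grad - (-np.sin(theta))) < 1e-12)  # True: matches the exact gradient
```

# With that gradient machinery in mind, the next cell builds the parameterized UCCSD circuit whose derivatives mindquantum computes automatically.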
molecule_pqc = generate_pqc_operator(
["null"], ansatz_parameter_names,
RX("null").on(0) + total_circuit,
Hamiltonian(hamiltonian_QubitOp))
# Since mindquantum requires two circuits (with their parameters) serving as the encoding circuit and the ansatz circuit respectively, here we use `RX("null")` as the encoding circuit, and later set the parameter `null` to 0 to neutralize it. By feeding concrete parameter values into `molecule_pqc`, we obtain the energy $E(\theta)=\langle \Psi_{UCC}(\theta) | \hat{H} | \Psi_{UCC}(\theta) \rangle$ for those variational parameters, together with the derivative with respect to each parameter.
# Next come steps (5)-(7) of the VQE workflow, i.e. optimizing the parameterized quantum circuit. With the MindSpore framework, we can build a neural-network model around the parameterized quantum circuit operator `molecule_pqc` and then optimize the variational parameters much as one trains a neural network:
# +
class PQCNet(ms.nn.Cell):
    def __init__(self, pqc):
        super(PQCNet, self).__init__()
        self.pqc = pqc
        self.weight = Parameter(initializer("Zeros",
                                            len(self.pqc.ansatz_params_names)),
                                name="weight")
        self.encoder_data_dummy = ms.Tensor([[0]],
                                            self.weight.dtype)

    def construct(self):
        energy, _, grads = self.pqc(self.encoder_data_dummy, self.weight)
        return energy

molecule_pqcnet = PQCNet(molecule_pqc)
# -
# Here we built a basic `PQCNet` by hand as a model example; it can be used like an ordinary machine-learning model, e.g. to optimize weights and compute derivatives. A better choice is `MindQuantumAnsatzOnlyLayer`, packaged with mindquantum, which is demonstrated later.
# The `PQCNet` constructed here uses the `"Zeros"` keyword to initialize all variational parameters to 0. Results from a CCSD (coupled-cluster) or MP2 (second-order many-body perturbation theory) calculation can also serve as initial values for the unitary coupled-cluster variational parameters. With zero initialization, $E(\vec{0})=\langle \Psi_{UCC}(\vec{0}) | \hat{H} | \Psi_{UCC}(\vec{0}) \rangle = E_{HF}$:
initial_energy = molecule_pqcnet()
print("Initial energy: %20.16f" % (initial_energy.asnumpy()))
# Finally, we optimize with a mindspore optimizer (Adagrad in the code below, with the learning rate set to $4\times 10^{-2}$), and the convergence criterion is set to $\left.|\epsilon|\right. = \left.|E^{k+1} - E^{k}|\right. \le 1\times 10^{-8}$:
# +
optimizer = ms.nn.Adagrad(molecule_pqcnet.trainable_params(), learning_rate=4e-2)
train_pqcnet = ms.nn.TrainOneStepCell(molecule_pqcnet, optimizer)
eps = 1.e-8
energy_diff = eps * 1000
energy_last = initial_energy.asnumpy() + energy_diff
iter_idx = 0
while (abs(energy_diff) > eps):
    energy_i = train_pqcnet().asnumpy()
    if iter_idx % 5 == 0:
        print("Step %3d energy %20.16f" % (iter_idx, float(energy_i)))
    energy_diff = energy_last - energy_i
    energy_last = energy_i
    iter_idx += 1
print("Optimization completed at step %3d" % (iter_idx - 1))
print("Optimized energy: %20.16f" % (energy_i))
print("Optimized amplitudes: \n", molecule_pqcnet.weight.asnumpy())
# -
# As we can see, the result given by unitary coupled-cluster is very close to the FCI value, i.e. it has good accuracy.
# ## Building the Unitary Coupled-Cluster Ansatz Step by Step
#
# <a id="step-by-step"></a>
# Above, we used `generate_uccsd` to construct everything needed for the unitary coupled-cluster ansatz in a single step. Here we break the procedure into steps, obtaining in turn the coupled-cluster operator, the corresponding quantum circuit, and the initial guess for the variational parameters taken from the classical CCSD calculation.
# First, import some additional dependencies, mainly functions from the hiqfermion module of mindquantum:
from mindquantum.hiqfermion.transforms import Transform
from mindquantum.hiqfermion.ucc import get_qubit_hamiltonian
from mindquantum.hiqfermion.ucc import uccsd_singlet_generator, uccsd_singlet_get_packed_amplitudes
from mindquantum.circuit import TimeEvolution
from mindquantum.nn.mindquantum_ansatz_only_layer import MindQuantumAnsatzOnlyLayer
# The molecular Hamiltonian is obtained with `get_qubit_hamiltonian`, which reads the previous calculation results:
hamiltonian_QubitOp = get_qubit_hamiltonian(molecule_of)
# The unitary coupled-cluster operator $ \hat{T} - \hat{T}^{\dagger} $ can be constructed with `uccsd_singlet_generator`. Supply the total number of qubits (total number of spin orbitals) and the total number of electrons, and set `anti_hermitian=True`:
ucc_fermion_ops = uccsd_singlet_generator(
molecule_of.n_qubits, molecule_of.n_electrons, anti_hermitian=True)
# The `ucc_fermion_ops` constructed in the previous step is parameterized. Use the Jordan-Wigner transformation to map the fermionic excitation operators to Pauli operators:
ucc_qubit_ops = Transform(ucc_fermion_ops).jordan_wigner()
# Next, we need the quantum circuit corresponding to the unitary operator $ \exp{(\hat{T} - \hat{T}^{\dagger})} $. `TimeEvolution` generates the circuit for $ \exp{(-i\hat{H}t)} $, where $ \hat{H} $ is a Hermitian operator and $t$ a real number. Note that when using `TimeEvolution`, `ucc_qubit_ops` already contains the complex factor $i$, so we need to divide `ucc_qubit_ops` by $i$, i.e. extract its imaginary part:
ansatz_circuit = TimeEvolution(ucc_qubit_ops.imag, 1.0).circuit
ansatz_parameter_names = ansatz_circuit.para_name
# We use `ansatz_parameter_names` to record the parameter names of this circuit. So far we have obtained everything the VQE circuit needs, including the Hamiltonian `hamiltonian_QubitOp` and the parameterized ansatz circuit `ansatz_circuit`; following the earlier procedure, we obtain the complete state-preparation circuit, reusing the previous `hartreefock_wfn_circuit` as the Hartree-Fock reference state:
total_circuit = hartreefock_wfn_circuit + ansatz_circuit
total_circuit.summary()
# Next, we need a reasonable initial value for the variational parameters. The `PQCNet` built earlier used 0 as the initial guess, which works in most cases. However, using CCSD data as the starting point for UCC may give better results. Use the `uccsd_singlet_get_packed_amplitudes` function to extract the CCSD parameters from `molecule_of`:
init_amplitudes_ccsd = uccsd_singlet_get_packed_amplitudes(
molecule_of.ccsd_single_amps, molecule_of.ccsd_double_amps, molecule_of.n_qubits, molecule_of.n_electrons)
init_amplitudes_ccsd = [init_amplitudes_ccsd[param_i] for param_i in ansatz_parameter_names]
# `MindQuantumAnsatzOnlyLayer` conveniently builds a machine-learning model based on a parameterized quantum circuit from the parameters and the circuit:
molecule_pqcnet = MindQuantumAnsatzOnlyLayer(
ansatz_parameter_names, total_circuit, Hamiltonian(hamiltonian_QubitOp.real))
# Use `init_amplitudes_ccsd` (i.e. the coupled-cluster coefficients from the CCSD calculation) as the initial variational parameters:
molecule_pqcnet.weight = Parameter(ms.Tensor(init_amplitudes_ccsd, molecule_pqcnet.weight.dtype))
initial_energy = molecule_pqcnet()
print("Initial energy: %20.16f" % (initial_energy.asnumpy()))
# In this example, the CCSD initial guess does not provide a better starting point. Readers can experiment with more molecules and more kinds of initial values (e.g. random initialization). Finally, run the VQE optimization; the optimizer is still Adagrad and the convergence criterion is unchanged. The code is essentially the same as before, with the corresponding variables updated:
# +
optimizer = ms.nn.Adagrad(molecule_pqcnet.trainable_params(), learning_rate=4e-2)
train_pqcnet = ms.nn.TrainOneStepCell(molecule_pqcnet, optimizer)
print("eps: ", eps)
energy_diff = eps * 1000
energy_last = initial_energy.asnumpy() + energy_diff
iter_idx = 0
while (abs(energy_diff) > eps):
    energy_i = train_pqcnet().asnumpy()
    if iter_idx % 5 == 0:
        print("Step %3d energy %20.16f" % (iter_idx, float(energy_i)))
    energy_diff = energy_last - energy_i
    energy_last = energy_i
    iter_idx += 1
print("Optimization completed at step %3d" % (iter_idx - 1))
print("Optimized energy: %20.16f" % (energy_i))
print("Optimized amplitudes: \n", molecule_pqcnet.weight.asnumpy())
# -
# ## Summary
#
# In this case study, we obtained the ground-state energy of the LiH molecule with a quantum neural network, using two approaches. In the first, we generated the quantum neural network that solves the problem with the packaged `generate_uccsd` function from MindQuantum; in the second, we constructed a similar quantum neural network step by step. The final results agree.
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import sys
sys.path.append("../")
import numpy as np
import logging, importlib
from decimal import Decimal
import math
import matplotlib.pyplot as plt
import matplotlib.ticker as mtick
# %matplotlib inline
logging.basicConfig(level=logging.WARNING)
# -
# ## Speed Up Plot: Full Integration
# +
# Set up N list and serial timings
Nlst = np.linspace(10,14,5)
#serialTimings = [7.7137, 7.9578, 7.9034, 8.3060, 8.8716]
serialTimings = [0.9, 2.8, 9.4, 41.2, 214.1]
# Overall timings 1: distributed arrays, no gpu
Timings1 = [114.9137, 119.5086, 123.0086, 141.3518, 211.8472]
# Overall timings 2: parfeval + gpu
Timings2 = [16.2419, 15.3659, 16.6793, 16.3852, 16.7597]
# Overall timings 3: parfeval + gpu, custom load balancing
Timings3 = [50.3076, 52.0146, 53.2892, 64.2178, 119.6148]
# Plot the speed-ups
fig =plt.figure(figsize=(12,6))
plt.plot(Nlst, [x/y for x,y in zip(serialTimings,Timings1)],'o-', label='dist,cpu')
plt.plot(Nlst, [x/y for x,y in zip(serialTimings,Timings2)],'o-', label='parfeval,gpu')
plt.plot(Nlst, [x/y for x,y in zip(serialTimings,Timings3)],'o-', label='parfeval,gpu,custom load blocking')
plt.xlabel('Number of Spins')
plt.ylabel('Speed Up')
plt.title('Speed Up Plot: evolve_real_system.m')
plt.legend()
plt.savefig("Speed_Up_Plot_evolve_real_system.png",bbox_inches='tight')
plt.show()
# -
# ## Speed Up Plot: Matrix Multiplication
# +
# Set up N list and serial timings
Nlst = np.linspace(10,14,5)
serialTimings = [0.2104, 2.2113, 4.7520, 11.667, 44.2324]
# Overall timings 1: distributed arrays, no gpu
Timings1 = [2.7836, 3.7334, 4.3066, 6.4626, 11.9326]
# Overall timings 2: parfeval + gpu
Timings2 = [4.7618, 5.6824, 6.8727, 10.1151, 16.7899]
# Overall timings 3: parfeval + gpu, custom load balancing
Timings3 = [1.5394, 2.4496, 3.6611, 6.6891, 16.2112]
# Plot the speed-ups
fig =plt.figure(figsize=(12,6))
plt.plot(Nlst, [x/y for x,y in zip(serialTimings,Timings1)],'o-', label='dist,cpu')
plt.plot(Nlst, [x/y for x,y in zip(serialTimings,Timings2)],'o-', label='parfeval,gpu')
plt.plot(Nlst, [x/y for x,y in zip(serialTimings,Timings3)],'o-', label='parfeval,gpu,custom load blocking')
plt.xlabel('Number of Spins')
plt.ylabel('Speed Up')
plt.title('Speed Up Plot: Parallelization Methods for Matrix Multiplication')
plt.legend()
plt.savefig("Speed_Up_Plot_parallelization_methods.png",bbox_inches='tight')
#plt.show()
# -
# ## Speed Up Plot: Parallelization Methods for Matrix Multiplication
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # QRNN Model Selection
#
# This notebook performs a grid search for the best-performing neural network configuration for
# quantile regression. A feed-forward network is used as the basic structure, and the following
# parameters are varied:
#
# - Network depth: 1 to 4 layers
# - Network width: 16, 32, 64, 128, 256, 512 neurons
# - Activation functions: linear, Sigmoid, ReLU, atan
#
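# The full grid described above can be enumerated with `itertools.product` (a sketch; the lists simply mirror the ranges stated in this introduction):

```python
from itertools import product

depths = [1, 2, 3, 4]
widths = [16, 32, 64, 128, 256, 512]
activations = ["linear", "sigmoid", "relu", "atan"]

# Every (depth, width, activation) combination to be cross-validated
configs = list(product(depths, widths, activations))
print(len(configs))  # 96 candidate configurations
```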
# ## ipyparallel Setup
import ipyparallel as ipp
c = ipp.Client(profile='mpi')
lview = c.load_balanced_view()
# %%px
# %env KERAS_BACKEND=tensorflow
# %env OMP_NUM_THREADS=1
import matplotlib; matplotlib.use("agg")
import numpy as np
import sys
sys.path.append("/home/simonpf/src/typhon/")
from typhon.retrieval.qrnn import QRNN
# ## Model Setup
# %%px
quantiles = np.array([0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95])
def create_model(depth, width, act_fn):
    qrnn = QRNN(5, quantiles, depth, width, act_fn)
    return qrnn
# ## Cross Validation
#
# Cross validation is used to determine the expected values of the quantile loss for each estimated quantile as well as the CRPS score of the estimated posterior.
# +
# %%px
from typhon.retrieval.scores import mean_quantile_score
x_train = np.load("src/atms_simulations/data/x_train_5.npy")
y_train = np.load("src/atms_simulations/data/y_train_5.npy")
def score(y_pred, y_test):
    quantile_scores = mean_quantile_score(y_pred, y_test, quantiles)
    crps = QRNN.crps(y_pred, y_test, quantiles)
    return np.append(quantile_scores, crps)
# -
def run_cross_validation(config):
    depth, width, act_fn = config
    qrnn = create_model(depth, width, act_fn)
    return qrnn.cross_validation(x_train, y_train, 1.0, n_folds = 10)
# ## Running the Calculations
depths = [6, 8, 10]
widths = [8, 16, 32, 64, 96, 128, 256, 512]
act_funcs = ["linear", "tanh", "sigmoid", "relu"]
act_funcs = ["relu"]
configs = [(d, w, f) for d in depths for w in widths for f in act_funcs]
async_results = lview.map_async(run_cross_validation, configs)
results = []
for i,r in enumerate(async_results):
    print(configs[i])
    print("Result: " + str(r))
    results += [(configs[i], r)]
# +
import numpy as np
data = np.zeros((5, 7, 4, 2))
act_indices = {"linear" : 0, "tanh" : 1, "sigmoid" : 2, "relu" : 3}
for ((n_l, n_n, act), (m, s)) in results:
    data[n_l, int(np.log2(n_n)) - 3, act_indices[act], :] = np.array([m, s])
# -
np.save("data/model_selection_structure", data)
# +
# %matplotlib inline
import matplotlib.pyplot as plt
import matplotlib_settings
f, axs = plt.subplots(1, 3, figsize = (15, 5))
x = 2.0 ** np.arange(3, 10)
ax = axs[0]
for i in range(4):
    m = data[i + 1, :, 1, 0]
    s = data[i + 1, :, 1, 1]
    ax.plot(x, m, lw = 2)
    ax.fill_between(x, m + s, m - s, alpha = 0.2)
ax.set_ylim([0.88, 1.1])
ax.set_xscale("log")
ax = axs[1]
for i in range(4):
    m = data[i + 1, :, 2, 0]
    s = data[i + 1, :, 2, 1]
    ax.plot(x, m, lw = 2)
    ax.fill_between(x, m + s, m - s, alpha = 0.2)
ax.set_ylim([0.88, 1.1])
ax.set_xscale("log")
ax = axs[2]
for i in range(4):
    m = data[i + 1, :, 3, 0]
    s = data[i + 1, :, 3, 1]
    ax.plot(x, m, lw = 2)
    ax.fill_between(x, m + s, m - s, alpha = 0.2)
ax.set_ylim([0.88, 1.1])
ax.set_xscale("log")
# +
import numpy as np
res = {"linear" : np.zeros((len(depths), len(widths), 2)),
"relu" : np.zeros((len(depths), len(widths), 2)),
"tanh" : np.zeros((len(depths), len(widths), 2)),
"sigmoid" : np.zeros((len(depths), len(widths), 2))}
inds = dict(zip(widths, range(len(widths))))
for ((n_layers, width, act), (mean, std)) in results:
    res[act][int(n_layers), inds[width], 0] = mean
    res[act][int(n_layers), inds[width], 1] = std
# +
def print_table(res, fn = None):
    s = r""
    for j in range(res.shape[1]):
        s += r" & $n_n = {0}$ ".format(widths[j])
    s += r"\\ \hline"
    for i in range(res.shape[0]):
        s += "$n_h = {0}$ & ".format(i)
        for j in range(res.shape[1] - 1):
            s += r"${:.2} \pm {:.2}$ & ".format(res[i, j, 0], res[i, j, 1])
        s += r"${:.2} \pm {:.2}$ \\ ".format(res[i, -1, 0], res[i, -1, 1])
        s += r"\hline"
    if fn:
        f = open(fn, "w")
        f.write(s)
        f.close()
    else:
        return s
def print_table2(res1, res2, fn = None):
    s = ""
    for i in range(res1.shape[0]):
        s += "$n_h = {0}$ & ".format(i)
        for j in range(res1.shape[1]):
            s += r"${:.2} \pm {:.2}$ & ".format(res1[i, j, 0], res1[i, j, 1])
        for j in range(res2.shape[1] - 1):
            s += r"${:.2} \pm {:.2}$ & ".format(res2[i, j, 0], res2[i, j, 1])
        s += r"${:.2} \pm {:.2}$ \\ ".format(res2[i, -1, 0], res2[i, -1, 1])
    if fn:
        f = open(fn, "w")
        f.write(s)
        f.close()
    else:
        return s
# -
print_table(res["linear"], "tables/linear.tbl")
print_table(res["sigmoid"], "tables/sigmoid.tbl")
print_table(res["tanh"], "tables/tanh.tbl")
print_table(res["relu"], "tables/relu.tbl")
# ## Training Parameters
#
# Below the effect of training parameters on the QRNN performance is analyzed. This is again done by performing 10-fold cross-validation with varying training parameters. To save computation time the parameters are varied only independently. The following parameters are investigated:
#
# - batch size: 128, 256, 512, 1024
# - learning rate decay: 1.5, 2.0, 5.0, 10.0
# - learning rate minimum: $10^{-4},10^{-5}, 10^{-6}, 10^{-7}, 10^{-8}$
# - convergence epochs: 1, 2, 4, 8
def run_cross_validation(config):
    batch_size, lr_decay, lr_minimum, convergence_epochs = config
    qrnn = create_model(3, 128, "relu")
    return qrnn.cross_validation(x_train, y_train, 1.0, n_folds = 10,
                                 batch_size = batch_size,
                                 learning_rate_decay = lr_decay,
                                 learning_rate_minimum = lr_minimum,
                                 convergence_epochs = convergence_epochs)
configs = []
configs += [(bs, 2.0, 1e-6, 2) for bs in [128, 256, 512, 1024]]
configs += [(256, lrd, 1e-6, 2) for lrd in [1.5, 2.0, 5.0, 10.0]]
configs += [(256, 2.0, 10 ** -lrm, 2) for lrm in [4, 5, 6, 7, 8]]
configs += [(256, 2.0, 1e-6, ce) for ce in [1, 2, 4, 8]]
async_results = lview.map_async(run_cross_validation, configs)
results = []
for i,r in enumerate(async_results):
    print(configs[i])
    print("Result: " + str(r))
    results += [(configs[i], r)]
configs = []
configs += [(64, lrd, 1e-6, 2) for lrd in [1.2, 1.5, 2.0]]
configs += [(64, 2.0, 10 ** -lrm, 2) for lrm in [3, 4, 5, 6]]
configs += [(64, 2.0, 1e-6, ce) for ce in [1, 2, 4, 8]]
async_results = lview.map_async(run_cross_validation, configs)
results = []
for i,r in enumerate(async_results):
    print(configs[i])
    print("Result: " + str(r))
    results += [(configs[i], r)]
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Masked Language Modeling
#
# In this lab, we will give an overview of the **masked language modeling** objective, and how the **Transformer** architecture is used for large-scale masked language modeling.
#
# +
# %pylab inline
import os, sys, glob, json, math
import pandas as pd
from tqdm import tqdm
from pprint import pprint
from collections import defaultdict
import torch
import torch.nn as nn
# %load_ext autoreload
# %autoreload 2
pd.set_option('display.max_colwidth', -1)
# -
# Let's set the random seeds for reproducibility.
# +
SEED = 1234
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
# -
# ## Background
#
# Recently, Devlin et al. published [BERT: Pre-training of Deep Bidirectional Transformers for
# Language Understanding](https://arxiv.org/pdf/1810.04805.pdf).
#
#
# **B**idirectional
#
# **E**ncoder
#
# **R**epresentations from
#
# **T**ransformers
#
#
# #### Goal:
# 1. **pre-train** a model that produces language representations.
# 2. **fine-tune** the model on a task.
#
#
# ## Masked Language Model Objective
#
# Randomly mask some of the tokens from the input, and predict the original vocabulary id of each masked token.
#
# - Given sequence $x_1,\ldots,x_N$.
#
# - Form **mask** $m_1,\ldots,m_N$ where $m_i\in \{0,1\}$.
# - E.g. $m_i=1$ with probability 0.15
#
# - Form **masked sequence** $\tilde{x}_1,\ldots,\tilde{x}_N$.
# - $\tilde{x}_i=\begin{cases} x_i & m_i=0\\ \texttt{[MASK]} & m_i=1\end{cases}$
#
#
# #### $$\mathcal{L}_{\text{MLM}}=-\sum_{\underbrace{i | m_i=1}_{\text{MASKED POSITIONS}}}\log p_{\theta}(\underbrace{x_i}_{\text{TRUE TOKEN}}|\underbrace{\tilde{x}_1,\ldots,\tilde{x}_N}_{\text{MASKED SEQUENCE}})$$
#
#
# <!-- Below, we will discuss the exact form of $\tilde{x}_i$ that the BERT authors used. -->
#
#
# <!-- #### Diagram of BERT Implementation -->
# <!--  -->
# ## Transformers
#
# So far we have modeled a sequence by factorizing the joint distribution into conditionals, and **parameterizing each conditional with a recurrent network**:
#
#
# #### $$p_{\theta}(x_1,\ldots,x_T)=\prod_{t=1}^T p_{\theta}(x_t | x_{<t})$$
# \begin{align}
# h_t &= RNN(x_{t-1}, h_{t-1})\\
# p_{\theta}(x_t | x_{<t}) &=\text{softmax}\left(Wh_t+b\right),
# \end{align}
#
# where $\theta$ are the model parameters (RNN parameters, $W, b$, embedding matrix).
#
#
# #### Alternative
#
# An alternative proposed in [[Vaswani et al 2017](https://arxiv.org/pdf/1706.03762.pdf)] is to parameterize each conditional with a **particular feed-forward architecture** called the **Transformer**. With this model, it is possible to compute all conditionals with a **single feed-forward pass**:
# \begin{align}
# (h_1,\ldots,h_T) &= Transformer(x)\\
# p_{\theta}(x_t | x_{<t}) &= \text{softmax}\left(Wh_t + b\right)
# \end{align}
#
# We will discuss briefly the key ideas, the overall **Transformer architecture (encoder only)**, and how they are used in Pytorch.
# ### High-Level View
#
# We can view the Transformer encoder as mapping a sequence to a sequence of vectors.
#
# <img src="img/high1.png" alt="Drawing" style="width: 35%;"/>
# Let's step through the key ideas of how this mapping is designed, and discuss some of its resulting properties.
# ### Key Idea 1: Position Embeddings
#
# **Unlike RNNs which can learn positional information via the hidden state over time, the Transformer has no notion of time**.
#
# Thus we encode inputs with **position** as well as **token** embeddings:
#
# <img src="img/high2.png" alt="Drawing" style="width: 35%;"/>
# +
input_sequence = ['<s>', 'my', 'pet', '[M]', '<s>']
max_len = 10
vocab = {'<s>': 0, 'my': 1, 'pet': 2, 'dog': 3, 'cat': 4, 'lion': 5, '[M]': 6}
dim = 6
token_embed = nn.Embedding(len(vocab), embedding_dim=dim) # an embedding for each token
position_embed = nn.Embedding(max_len, embedding_dim=dim) # an embedding to account for the position of the token
# +
input_vector = torch.tensor([vocab[x] for x in input_sequence]).unsqueeze(1) # get the numerical representation of the token
input_embeddings = token_embed(input_vector) + position_embed(torch.arange(len(input_vector))).unsqueeze(1) # add the input embedding to the position embedding
input_embeddings.size()
# -
# **Warning!!** The pytorch Transformer classes accept input as `Length x Batch x Dim`
# #### Key Idea 2: Modularity
# The Transformer (encoder) is composed of a stack of **N identical layers**.
#
# <img src="img/layers.png" alt="Drawing" style="width: 35%;"/>
# +
import torch.nn as nn
# nn.TransformerEncoder?
# -
# #### The `forward` passes the input through the N layers, then normalizes it:
#
# **Warning!!** The forward function accepts input as `Length x Batch x Dim`
# +
# # nn.TransformerEncoder.forward??
# +
encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=2, dim_feedforward=64, dropout=0.1) # attention is all you need config nn.TransformerEncoderLayer(d_model=512, nhead=8)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=4)
# +
outputs = encoder(input_embeddings)
print("input size: \t%s" % str(tuple(input_embeddings.shape)))
print("output size:\t%s" % str(tuple(outputs.shape))) # contextualize embedding of each token
outputs
# -
print(token_embed(input_vector)) # the original token embedding
# #### Each layer has two parts, **self-attention** and a feed-forward transformation:
#
# <img src="img/layer.png" alt="Drawing" style="width: 65%;"/>
# +
# # nn.TransformerEncoderLayer??
# +
# # nn.TransformerEncoderLayer.forward??
# -
# ### Key Idea 3: Self-Attention
#
# In the RNN, the hidden state contains information about previous tokens.
# The Transformer instead performs **attention** over all inputs at a given layer. 'Attention' computes an output vector by taking a weighted sum of input vectors. The weights are 'attention weights'. The Transformer uses **scaled dot-product attention**:
# #### $$\text{Attention}(Q,K,V)=\text{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V$$
#
# and 'Multi-head Attention' refers to applying several of these operations in parallel.
#
# #### *Key Property*: Each output vector of a layer $n$ can use information from **all** inputs to the layer $n$.
#
# Thus each **final output vector** can incorporate information from **all input words**.
#
# (If we want to prevent information flow such as in left-to-right language modeling, we can use masking).
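# A minimal NumPy sketch of scaled dot-product attention (single head, no masking, no learned projections; `nn.MultiheadAttention` used below wraps this core operation with linear projections of Q, K and V):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)    # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # each row sums to 1
    return weights @ v, weights

q = k = v = np.random.randn(5, 6)                   # (seq_len, dim)
out, w = scaled_dot_product_attention(q, k, v)
print(out.shape, w.shape)                           # (5, 6) (5, 5)
```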
# +
# nn.MultiheadAttention?
# +
attn = nn.MultiheadAttention(dim, 2, dropout=0.0)
attn_outputs, attn_weights = attn.forward(query=outputs, key=outputs, value=outputs)
print("input shape: %s" % (str(tuple(outputs.size()))))
print("output shape: %s" % (str(tuple(attn_outputs.size()))))
print(outputs)
print("\nattn weights shape: %s" % (str(tuple(attn_weights.size()))))
print(attn_weights)
# -
# #### Summary
class Transformer(nn.Module):
    def __init__(self, vocab_size, max_len, dim=8, num_layers=4, nhead=2):
        super().__init__()
        self.token_embed = nn.Embedding(vocab_size, dim)
        self.position_embed = nn.Embedding(max_len, dim)
        encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=nhead, dim_feedforward=64, dropout=0.0)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.projection = nn.Linear(dim, vocab_size)

    def features(self, token_indices):
        pos = torch.arange(len(token_indices), device=token_indices.device).unsqueeze(1)
        x = self.token_embed(token_indices) + self.position_embed(pos)
        x = self.encoder(x)
        return x

    def forward(self, token_indices):
        x = self.features(token_indices)
        x = self.projection(x)
        return x
input_vector.size()
# +
model = Transformer(len(vocab), max_len=100)
model.features(input_vector)
# -
model.features(input_vector).shape
model
model.token_embed(input_vector)
# ## Back to Masked Language Modeling
#
# Recall the **key property** of Transformers: due to self-attention, each output vector can incorporate information from *all* input tokens.
#
# <img src="img/mlm.png" alt="Drawing" style="width: 45%;"/>
#
# This is useful for masked language modeling, where we want to use information from the entire context when predicting the masked token(s).
# #### MLM on Persona-Chat
import utils
raw_datasets, datasets, vocab = utils.load_personachat()
# +
from torch.utils.data.dataloader import DataLoader
trainloader = DataLoader(datasets['train'], batch_size=4, collate_fn=lambda x: utils.pad_collate_fn(vocab.get_id('<pad>'), x))
validloader = DataLoader(datasets['valid'], batch_size=4, collate_fn=lambda x: utils.pad_collate_fn(vocab.get_id('<pad>'), x))
# -
batch = next(trainloader.__iter__())
batch
def mask_tokens(inputs, mask_prob, pad_token_id, mask_token_id, vsize):
    """Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original."""
    inputs = inputs.clone()
    labels = inputs.clone()
    # Sample tokens in each sequence for masked-LM training
    masked_indices = torch.bernoulli(torch.full(labels.shape, mask_prob)).bool()
    masked_indices = masked_indices & (inputs != pad_token_id)
    labels[~masked_indices] = -1  # We only compute loss on masked tokens
    # 80% of the time, we replace masked input tokens with tokenizer.mask_token ([MASK])
    indices_replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
    inputs[indices_replaced] = mask_token_id
    # 10% of the time, we replace masked input tokens with random word
    indices_random = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~indices_replaced
    random_words = torch.randint(vsize, labels.shape, dtype=torch.long)
    inputs[indices_random] = random_words[indices_random]
    # The rest of the time (10% of the time) we keep the masked input tokens unchanged
    return inputs, labels
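# The nested Bernoulli draws above compose into the 80/10/10 split described in the docstring. For each token selected with probability 0.15, a quick arithmetic check (added for illustration):

```python
p_selected = 0.15
p_mask = p_selected * 0.8                # replaced by [MASK]        -> 12% of all tokens
p_random = p_selected * (1 - 0.8) * 0.5  # replaced by a random word -> 1.5%
p_keep = p_selected * (1 - 0.8) * 0.5    # left unchanged            -> 1.5%
assert abs(p_mask - 0.12) < 1e-12 and abs(p_random - 0.015) < 1e-12
```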
inputs, labels = mask_tokens(batch, mask_prob=0.15, mask_token_id=vocab.get_id('[M]'), pad_token_id=vocab.get_id('<pad>'), vsize=len(vocab))
print("Mask token id: %d" % vocab.get_id('[M]'))
inputs
labels
model = Transformer(len(vocab), max_len=200)
logits = model(inputs)
logits.size()
labels.size()
criterion = nn.CrossEntropyLoss(ignore_index=-1)
# +
logits_ = logits.view(-1, logits.size(2))
labels_ = labels.view(-1)
criterion(logits_, labels_)
# -
if False:
    import torch.optim as optim
    from tqdm import tqdm, trange
    from collections import defaultdict
    from torch.utils.data.dataloader import DataLoader

    trainloader = DataLoader(datasets['train'], batch_size=64, collate_fn=lambda x: utils.pad_collate_fn(vocab.get_id('<pad>'), x))
    validloader = DataLoader(datasets['valid'], batch_size=64, collate_fn=lambda x: utils.pad_collate_fn(vocab.get_id('<pad>'), x))

    device = torch.device('cuda:0') if torch.cuda.is_available() else torch.device('cpu')
    model = Transformer(len(vocab), max_len=65, dim=256, nhead=8).to(device)
    model_parameters = [p for p in model.parameters() if p.requires_grad]
    optimizer = optim.Adam(model_parameters, lr=0.001)
    criterion = nn.CrossEntropyLoss(ignore_index=-1).to(device)

    stats = defaultdict(list)
    for epoch in range(50):
        for step, batch in enumerate(trainloader):
            model.train()
            # Mask the batch
            inputs, labels = mask_tokens(batch, mask_prob=0.15,
                                         pad_token_id=vocab.get_id('<pad>'),
                                         mask_token_id=vocab.get_id('[M]'),
                                         vsize=len(vocab))
            inputs = inputs.to(device)
            labels = labels.to(device)
            logits = model(inputs)
            logits_ = logits.view(-1, logits.size(2))
            labels_ = labels.view(-1)
            optimizer.zero_grad()
            loss = criterion(logits_, labels_)
            loss.backward()
            optimizer.step()
            stats['train_loss'].append(loss.item())
            stats['train_loss_log'].append(loss.item())
            if (step % 500) == 0:
                avg_loss = sum(stats['train_loss_log']) / len(stats['train_loss_log'])
                print("Epoch %d Step %d\tTrain Loss %.3f" % (epoch, step, avg_loss))
                stats['train_loss_log'] = []
        for batch in validloader:
            model.eval()
            with torch.no_grad():
                # Mask the batch
                inputs, labels = mask_tokens(batch, mask_prob=0.15,
                                             pad_token_id=vocab.get_id('<pad>'),
                                             mask_token_id=vocab.get_id('[M]'),
                                             vsize=len(vocab))
                inputs = inputs.to(device)
                labels = labels.to(device)
                logits = model(inputs)
                logits_ = logits.view(-1, logits.size(2))
                labels_ = labels.view(-1)
                loss = criterion(logits_, labels_)
                stats['valid_loss'].append(loss.item())
        print("=== Epoch %d\tValid Loss %.3f" % (epoch, stats['valid_loss'][-1]))
# ### Example Conditionals
# #### Load model
# +
device = torch.device('cuda:0') if torch.cuda.is_available() else torch.device('cpu')
checkpoint = utils.load('model', 'model', best=True)
options = checkpoint['options']
stats = checkpoint['stats']
model = utils.Transformer(len(vocab), options['max_len'],
dim=options['dim'],
nhead=options['nhead'])
model.load_state_dict(checkpoint['model_dict'])
# -
model.eval()
model = model.to(device)
# +
sentences = [['<s>', 'i', 'have', 'a', 'pet', '[M]', '.', '<s>'],
['<s>', 'i', 'have', 'two', 'pet', '[M]', '.', '<s>'],
['<s>', 'my', '[M]', 'is', 'a', 'lawyer', '.', '<s>'],
['<s>', 'my', '[M]', 'is', 'a', '[M]', '.', '<s>'],
['<s>', 'i', '[M]', '[M]', '[M]', 'sometimes', '.' , '<s>']]
def get_top_masked_tokens(tokens, vocab, device, top=10):
ids = torch.tensor([vocab.get_id(x) for x in tokens], device=device).unsqueeze(1)
masked = ids == vocab.get_id('[M]')
logits = model(ids)[masked]
probs = torch.softmax(logits, -1)
print(' '.join(tokens))
    for ps in probs:
        top_probs, idxs = ps.sort(descending=True)  # don't shadow the outer `probs`
        for i in range(top):
            print("\t%s (%.4f)" % (vocab.get_token(idxs[i].item()),
                                   top_probs[i].item()))
print()
# -
for s in sentences:
get_top_masked_tokens(s, vocab, device)
# ## Back to *BERT*
#
# **B**idirectional
#
# **E**ncoder
#
# **R**epresentations from
#
# **T**ransformers
# #### - Masked Language Modeling at scale
#
# #### - Learned representations are useful downstream
#
# <img src="img/bert_citations.png" alt="Drawing" style="width: 45%;"/>
# #### Great implementation in [transformers](https://github.com/huggingface/transformers):
# +
# # !pip install transformers
# -
import torch
from transformers import (
BertForMaskedLM,
BertTokenizer
)
# ### Details -- Model Variants
# - $\text{BERT}_{\text{BASE}}$: 12 layers, hidden dimension 768, 12 attention heads (**110 million parameters**)
# - $\text{BERT}_{\text{LARGE}}$: 24 layers, hidden dimension 1024, 16 attention heads (**340 million parameters**)
# +
tokenizer = BertTokenizer.from_pretrained('bert-base-cased', do_lower_case=False)
model = BertForMaskedLM.from_pretrained('bert-base-cased', output_attentions=True)
if torch.cuda.is_available():
model.cuda()
# -
# ### Details -- Input Implementation
#
#
# - `[CLS]` token: starts each sequence. Used as aggregate sequence representation.
# - `[SEP]` token: separates two segments (e.g. two sentences).
# - **Segment embedding**: learned embedding for every token indicating whether it belongs
# to sentence A or sentence B.
# - **Position embedding**: learned.
#
#
# <img src="img/bert_inputs.png" alt="Drawing" style="width: 75%;"/>
#
# **Exercise:** Which downstream tasks would two sequences be useful for?
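# A sketch of how these pieces combine for a hypothetical pair of segments (the helper below is
# illustrative, not the `transformers` API; real inputs use WordPiece ids rather than strings):

```python
def build_bert_inputs(tokens_a, tokens_b):
    """Assemble [CLS] A [SEP] B [SEP] with matching segment and position ids."""
    tokens = ["[CLS]"] + tokens_a + ["[SEP]"] + tokens_b + ["[SEP]"]
    segment_ids = [0] * (len(tokens_a) + 2) + [1] * (len(tokens_b) + 1)  # sentence A vs. B
    position_ids = list(range(len(tokens)))  # positions index a learned embedding table
    return tokens, segment_ids, position_ids

tokens, seg, pos = build_bert_inputs(["my", "dog", "is", "cute"], ["he", "likes", "play", "##ing"])
print(tokens)  # ['[CLS]', 'my', 'dog', 'is', 'cute', '[SEP]', 'he', 'likes', 'play', '##ing', '[SEP]']
print(seg)     # [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
```

# The final input embedding for each position is the sum of its token, segment, and position
# embeddings.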
# ### Tokenization
#
# #### BERT represents text using **subword** tokens with a 30k token vocabulary.
#
#
#
# (more info [here](https://github.com/google/sentencepiece) and in the papers mentioned there)
#
# <!-- - **Token embedding**: WordPiece embeddings with 30k token vocabulary. -->
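# A rough sketch of the greedy longest-match-first lookup behind WordPiece-style tokenization
# (the toy vocabulary below is an assumption for illustration, not BERT's real 30k vocab):

```python
def wordpiece(word, vocab):
    """Greedily split one word into subword pieces, longest match first."""
    pieces, start = [], 0
    while start < len(word):
        end, cur = len(word), None
        while start < end:
            piece = word[start:end]
            if start > 0:
                piece = "##" + piece  # continuation pieces carry a '##' prefix
            if piece in vocab:
                cur = piece
                break
            end -= 1
        if cur is None:
            return ["[UNK]"]  # the word cannot be decomposed with this vocabulary
        pieces.append(cur)
        start = end
    return pieces

toy_vocab = {"pre", "train", "##train", "##ing", "sub", "##word", "##s"}
print(wordpiece("pretraining", toy_vocab))  # ['pre', '##train', '##ing']
print(wordpiece("subwords", toy_vocab))     # ['sub', '##word', '##s']
```

# Subword vocabularies keep the model open-vocabulary: rare words decompose into frequent
# pieces instead of mapping to an unknown token.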
tokenizer.tokenize("Pretraining is cool.")
tokenizer.tokenize("BERT represents text using subwords.")
# ### Examining Learned Conditionals (& Representations)
#
# **Probing tasks** can be used to examine aspects of what the model has learned.
#
# Following [Petroni et al 2019](https://arxiv.org/pdf/1909.01066.pdf) we probe for '**knowledge**' that the model has learned by querying for masked out objects, e.g.:
#
# <img src="img/bert_kb.png" alt="Drawing" style="width: 75%;"/>
#
# The task also illustrates some aspects of the **conditional distributions** and **contextualized representations** that the model has learned.
#
# (image from [Petroni et al 2019])
#
#
# **Exercise:** The authors only consider *single-token* prediction. Why?
# #### Probing Task
#
# We use a dataset from [Petroni et al 2019](https://github.com/facebookresearch/LAMA).
import utils
data = utils.load_lama_squad(download=True)
data[0]
# +
from tqdm import tqdm  # tqdm is otherwise only imported inside the disabled training cell above

results = []
model.eval()
for example in tqdm(data, total=len(data)):
sentence, label = example['masked_sentences'][0], example['obj_label']
inp = torch.tensor([
[tokenizer.cls_token_id] +
tokenizer.encode(sentence) +
[tokenizer.sep_token_id]
], device=device)
mask = (inp == tokenizer.vocab[tokenizer.mask_token])
out, attn = model(inp)  # tuple output; on transformers >= 4 use: o = model(inp); out, attn = o.logits, o.attentions
probs, token_ids = out[mask].softmax(1).topk(10)
probs = probs[0].tolist()
token_ids = token_ids[0].tolist()
tokens = [tokenizer.ids_to_tokens[i] for i in token_ids]
results.append({
'sentence': sentence,
'label': label,
'top_tokens': tokens,
'top_probs': probs,
'correct@1': tokens[0] == label,
'attn': attn
})
print("correct@1: %.3f" % (
len([r for r in results if r['correct@1']]) / len(results)
))
# +
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
import matplotlib.pyplot as plt  # needed for the attention heatmaps below
correct = [r for r in results if r['correct@1']]
wrong = [r for r in results if not r['correct@1']]
def show(idx=0, attn_layer=0, is_correct=True):
result = correct[idx] if is_correct else wrong[idx]
# --- format the result into a string
top_str = '\n\t'.join([
('\t%s\t(%.4f)' % (tokens, probs))
for tokens, probs in zip(result['top_tokens'], result['top_probs'])
])
print("%s\n\tlabel:\t%s\n\n\ttop:%s" % (
result['sentence'],
result['label'],
top_str
))
# --- visualize attention
print("Attention weights (12 heads) from layer %d:" % attn_layer)
fig, axs = plt.subplots(3, 4, figsize=(18, 12))
toks = ['[CLS]'] + tokenizer.tokenize(result['sentence']) + ['[SEP]']
for i, ax in enumerate(axs.reshape(-1)):
ax.matshow(result['attn'][attn_layer][0][i].data.cpu().numpy(), cmap='gray')
ax.set_xticks(range(len(toks)))
ax.set_xticklabels(toks, rotation=90, fontsize=15)
ax.set_yticks(range(len(toks)))
ax.set_yticklabels(toks, fontsize=15)
plt.tight_layout()
interactive(
show,
idx=(0, min(len(correct), len(wrong))-1),
attn_layer=range(12),
is_correct=True
)
# Source notebook: Part 02/003_Decoding/mask_lm_lab/masked_lms.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="bOy0Yzvxko84"
from nltk.corpus import stopwords
import os
import re
import pandas as pd
import numpy as np
import time
# from google.colab import files as filess  # only uncomment when running on Google Colab
# +
import nltk
nltk.download('stopwords')
# +
import urllib.request
urllib.request.urlretrieve ("https://archive.ics.uci.edu/ml/machine-learning-databases/20newsgroups-mld/20_newsgroups.tar.gz", "a.tar.gz")
urllib.request.urlretrieve ("http://archive.ics.uci.edu/ml/machine-learning-databases/20newsgroups-mld/mini_newsgroups.tar.gz", "b.tar.gz")
# + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="mZdjZLIEko9J"
import tarfile
tar = tarfile.open("a.tar.gz")
tar2 = tarfile.open("b.tar.gz")
tar.extractall()
tar2.extractall()
tar.close()
tar2.close()
# +
stop_words = set(stopwords.words('english'))
block_wrds = ['sender:','subject:','writes:','references:','organization:','from:','date:','>i','22','|>','>>','reply-to:','xref:','newsgroups:','>in','>the','message-id:','lines:','path:','re:','--','sender:','last','better','never','every','even','two','good','used','first','need','going','must','really','might','well','without','made','give','look','try','far','less','seem','new','make','many','way','since','using','take','help','thanks','send','free','may','see','much','want','find','would','one','like','get','use','also','could','say','us','go','please','said','set','got','sure','come','lot','seems','able','anything','put']
"writes:" in block_wrds  # quick sanity check (the list stores header tokens with their colons)
# + [markdown] colab_type="text" id="fl967rpjko9W"
# Strip the documents and collect word counts into a dictionary, using only the text after the header.
# + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="XsiIOY21ko9X"
dictionary = {}
type(dictionary)
count=0
for file in os.listdir("mini_newsgroups"): # making the features list by finding the frequency of each word in the docs
for files in os.listdir("mini_newsgroups/"+file):
#print(file,files)
f = open("mini_newsgroups/"+file+"/"+files,'r',errors='ignore')
message = f.read()
message = message.split()
k =1
for i in message:
count +=1
if(len(i) > 1):
if not i.lower() in stop_words:
if not i.lower() in block_wrds:
if(i.lower() in dictionary.keys()):
dictionary[i.lower()] = dictionary[i.lower()] +1
else:
dictionary[i.lower()] = 1
f.close()
# + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="7DqLUmzVko9a"
import operator
sorted_vocab = sorted(dictionary.items(), key= operator.itemgetter(1), reverse= True) # sort the vocab based on frequency
# sorted_vocab
# + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="z-5xaivUko9f"
top_val = []
sorted_vocab[1000][1]
size = len(sorted_vocab)
for i in range(size):
if(sorted_vocab[1000][1] <= sorted_vocab[i][1]):
top_val.append(sorted_vocab[i][0])
# +
top_val[0:100]
# +
df = pd.DataFrame(columns = top_val)
df.columns
start_time = time.time()
start_time
# +
# Build the document-term matrix: one row per document, one column per top word.
df = pd.DataFrame(columns = top_val)
df.columns
count=0
for file in os.listdir("20_newsgroups"): # making the dictionary in which top 1000 words count are there
for files in os.listdir("20_newsgroups/"+file):
count=count+1
#print(file,files)
df.loc[len(df)] = np.zeros(len(top_val))
f = open("20_newsgroups/"+file+"/"+files,'r',errors='ignore')
message = f.read()
message = message.split()
k =0
for i in message:
if(i.lower() in df.columns):
df[i.lower()][len(df)-1] += 1
f.close()
count
# +
df.shape
# + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="wLOg4Tm6ko95"
y=[]
i=0
count=0
for file in os.listdir("20_newsgroups"):
for files in os.listdir("20_newsgroups/"+file):
#print(file,files)
count+=1
y.append(i)
i=i+1
# +
y = np.array(y)
y.shape,df.shape
x = df.values
count
# + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="uXN1658wko-D"
from sklearn import model_selection
x_train, x_test, y_train, y_test = model_selection.train_test_split(x, y, test_size = 0.25, random_state = 0)
# +
from sklearn.naive_bayes import MultinomialNB
clf = MultinomialNB()
clf.fit(x_train, y_train)
y_pred = clf.predict(x_test)
train_score = clf.score(x_train, y_train)
test_score = clf.score(x_test, y_test)
train_score, test_score
# +
newData = df
newData['out'] = y
newData.to_csv("textClassification.csv")
end_time = time.time()
total_time = end_time - start_time
total_time
# + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="WgGRXdwhjcff"
filess.download('textClassification.csv') #colaboratory notebook stuff ignore!
# + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="hXmOm3U_2ShU"
# !pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# 1. Authenticate and create the PyDrive client.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
# + [markdown] colab_type="text" id="nSw8kiTY4YhS"
# ## Code for connecting and authenticating Google Drive to load the CSV file into this notebook
# +
file_list = drive.ListFile({'q': "'1vrFHzEYTZkF_vJvn7KYh6OhYLphLvzsX' in parents and trashed=false"}).GetList()
for file1 in file_list:
print('title: %s, id: %s' % (file1['title'], file1['id']))
# + [markdown] colab_type="text" id="HHumiRgC4nyX"
# ## The main Multinomial Naive Bayes implementation starts below
# + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="mz7HhKBP42Sc"
import pandas as pd
import numpy as np
import time
from google.colab import files as filess
# + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="3RXCqVo54_5b"
data = drive.CreateFile({'id': '1lQPcPwbGsxvrtWi5vl4PnQS5VbC-VCoX'})
data.GetContentFile('textClassification.csv')
# +
data = pd.read_csv("textClassification.csv")
Y = data["out"]
print(data.shape)
data.drop(['out'], axis = 1, inplace = True)
data.drop(['Unnamed: 0'], axis = 1, inplace = True)
# +
from sklearn import model_selection
X_train, X_test, Y_train, Y_test = model_selection.train_test_split(data, Y, test_size = 0.30,shuffle=True, random_state = 0)
f_list = list(data)
f_list
# + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="56AlSmEM5P3V"
def train(X_train,Y_train):
result = {}
set_class = set(Y_train)
for curr_class in set_class:
result[curr_class] = {}
result["total_data"] = len(Y_train)
#all the x_train rows whose Y is curr_class
curr_class_rows = (Y_train == curr_class)
X_train_curr = X_train[curr_class_rows]
Y_train_curr = Y_train[curr_class_rows]
#traverse through all the features of X_train and get the sum of each word and save it in the dict
#i.e result[class][word] = count of word appearance in the doc
sums = 0
for x in f_list:
result[curr_class][x] = X_train_curr[x].sum()
sums = sums+result[curr_class][x]
result[curr_class]["total_count"] = sums
return result
# +
dictionary = train(X_train,Y_train)
len(dictionary[0]), len(f_list)
# -
# In the function below we pass the selected class and the document, and compute that class's (log) probability accordingly.
# + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="chPBCY9p5eRB"
def probablity(dictionary, doc, current_class):
    # log prior estimate for the class: log(word count in class / number of training docs)
    output = np.log(dictionary[current_class]["total_count"]) - np.log(dictionary["total_data"])
    # print("output1", output)
    for f_name, f_count in doc.items():  # Series.iteritems() was removed in pandas 2.0; .items() works everywhere
        freq_count = dictionary[current_class][f_name] + 1  # Laplace (add-one) smoothing handles zero counts
        total_count = dictionary[current_class]['total_count'] + len(f_list)
        curr_prob = np.log(freq_count) - np.log(total_count)
        output = output + int(f_count) * curr_prob  # add the log-probability once per word occurrence
    return output
# -
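# To make the smoothing above concrete, a tiny worked example with a hypothetical two-word
# vocabulary (plain `math.log` in place of `np.log`):

```python
import math

# Hypothetical counts for one class: 'ball' seen 8 times, 'game' 2 times (10 total)
counts = {"ball": 8, "game": 2}
total = sum(counts.values())
vocab_size = len(counts)

def smoothed_log_prob(word):
    # Laplace smoothing: (count + 1) / (total + |V|), in log space
    return math.log(counts.get(word, 0) + 1) - math.log(total + vocab_size)

print(round(smoothed_log_prob("ball"), 4))    # log(9/12)  ~ -0.2877
print(round(smoothed_log_prob("game"), 4))    # log(3/12)  ~ -1.3863
print(round(smoothed_log_prob("unseen"), 4))  # log(1/12): unseen words still get nonzero mass
```

# Without the +1 in the numerator, a single unseen word would drive the whole document's
# log-probability to minus infinity for that class.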
# In the function below we take the row (document) whose class we have to predict and compare it against every class one by one.
#
# + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="bBwLJjt75kJz"
def predictSinglePoint(dictionary,doc):
classes = dictionary.keys()
best_p = -1000
best_class = -1
first_run = True
    for current_class in classes:  # compare each class and keep the best one
if(current_class == "total_data"):
continue
p_curr_class = probablity(dictionary,doc,current_class)
# print(current_class," ",p_curr_class)
if(first_run or p_curr_class > best_p):
best_p = p_curr_class
best_class = current_class
first_run = False
return best_class
# -
# We pass a single row (document) at a time to predictSinglePoint.
# + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="Ll5wtS535mn9"
def predict(dictionary,X_test):
Y_pred = []
for j in X_test.iterrows():
x_class = predictSinglePoint(dictionary,j[1]) # pass each document (row) to the predictSinglept function
Y_pred.append(x_class)
# break
return Y_pred
# + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="IQU9vB-l5seT"
Y_pred = predict(dictionary,X_test)
# +
from sklearn.metrics import accuracy_score
accuracy_score(Y_test,Y_pred)
# Source notebook: 17. textClassificationProject/TextClassificationFinalDraft.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Train GPT on addition
#
# Train a GPT model on a dedicated addition dataset to see if a Transformer can learn to add.
# set up logging
import logging
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
level=logging.INFO,
)
# make deterministic
from mingpt.utils import set_seed
set_seed(42)
import numpy as np
import torch
import torch.nn as nn
from torch.nn import functional as F
# +
from torch.utils.data import Dataset
class AdditionDataset(Dataset):
"""
Returns addition problems of up to some number of digits in the inputs. Recall
that all GPT cares about are sequences of integers, and completing them according to
patterns in the data. Therefore, we have to somehow encode addition problems
as a sequence of integers.
The sum of two n-digit numbers gives a third up to (n+1)-digit number. So our
encoding will simply be the n-digit first number, n-digit second number,
and (n+1)-digit result, all simply concatenated together. Because each addition
problem is so structured, there is no need to bother the model with encoding
+, =, or other tokens. Each possible sequence has the same length, and simply
contains the raw digits of the addition problem.
As a few examples, the 2-digit problems:
- 85 + 50 = 135 becomes the sequence [8, 5, 5, 0, 1, 3, 5]
- 6 + 39 = 45 becomes the sequence [0, 6, 3, 9, 0, 4, 5]
etc.
We will also only train GPT on the final (n+1)-digits because the first
two n-digits are always assumed to be given. So when we give GPT an exam later,
we will e.g. feed it the sequence [0, 6, 3, 9], which encodes that we'd like
to add 6 + 39, and hope that the model completes the integer sequence with [0, 4, 5]
in 3 sequential steps.
fun exercise: does it help if the result is asked to be produced in reverse order?
"""
def __init__(self, ndigit, split):
self.split = split # train/test
self.ndigit = ndigit
self.vocab_size = 10 # 10 possible digits 0..9
# +1 due to potential carry overflow, but then -1 because very last digit doesn't plug back
self.block_size = ndigit + ndigit + ndigit + 1 - 1
# split up all addition problems into either training data or test data
num = (10**self.ndigit)**2 # total number of possible combinations
r = np.random.RandomState(1337) # make deterministic
perm = r.permutation(num)
num_test = min(int(num*0.2), 1000) # 20% of the whole dataset, or only up to 1000
self.ixes = perm[:num_test] if split == 'test' else perm[num_test:]
def __len__(self):
return self.ixes.size
def __getitem__(self, idx):
# given a problem index idx, first recover the associated a + b
idx = self.ixes[idx]
nd = 10**self.ndigit
a = idx // nd
b = idx % nd
c = a + b
render = f'%0{self.ndigit}d%0{self.ndigit}d%0{self.ndigit+1}d' % (a,b,c) # e.g. 03+25=28 becomes "0325028"
dix = [int(s) for s in render] # convert each character to its token index
# x will be input to GPT and y will be the associated expected outputs
x = torch.tensor(dix[:-1], dtype=torch.long)
y = torch.tensor(dix[1:], dtype=torch.long) # predict the next token in the sequence
y[:self.ndigit*2-1] = -100 # we will only train in the output locations. -100 will mask loss to zero
return x, y
# -
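# The encoding described in the docstring can be checked in isolation with a standalone helper
# mirroring `__getitem__` (plain lists instead of torch tensors):

```python
def encode_addition(a, b, ndigit):
    """Render a + b = c as the flat digit sequence, plus the shifted input/target pair."""
    render = f'%0{ndigit}d%0{ndigit}d%0{ndigit + 1}d' % (a, b, a + b)
    dix = [int(s) for s in render]
    x = dix[:-1]                                     # model input: all but the last digit
    y = dix[1:]                                      # next-token targets, shifted by one
    y[:ndigit * 2 - 1] = [-100] * (ndigit * 2 - 1)   # mask the loss outside the answer digits
    return dix, x, y

dix, x, y = encode_addition(85, 50, ndigit=2)
print(dix)  # [8, 5, 5, 0, 1, 3, 5]
print(x)    # [8, 5, 5, 0, 1, 3]
print(y)    # [-100, -100, -100, 1, 3, 5]
```

# Note how only the answer digits contribute to the loss: the problem digits are given, so
# predicting them would teach the model nothing about addition.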
# create a dataset for e.g. 2-digit addition
ndigit = 2
train_dataset = AdditionDataset(ndigit=ndigit, split='train')
test_dataset = AdditionDataset(ndigit=ndigit, split='test')
train_dataset[0] # sample a training instance just to see what one raw example looks like
# +
from mingpt.model import GPT, GPTConfig, GPT1Config
# initialize a baby GPT model
mconf = GPTConfig(train_dataset.vocab_size, train_dataset.block_size,
n_layer=2, n_head=4, n_embd=128)
model = GPT(mconf)
# +
from mingpt.trainer import Trainer, TrainerConfig
# initialize a trainer instance and kick off training
tconf = TrainerConfig(max_epochs=50, batch_size=512, learning_rate=6e-4,
lr_decay=True, warmup_tokens=1024, final_tokens=50*len(train_dataset)*(ndigit+1),
num_workers=4)
trainer = Trainer(model, train_dataset, test_dataset, tconf)
trainer.train()
# +
# now let's give the trained model an addition exam
from torch.utils.data.dataloader import DataLoader
from mingpt.utils import sample
def give_exam(dataset, batch_size=32, max_batches=-1):
results = []
loader = DataLoader(dataset, batch_size=batch_size)
for b, (x, y) in enumerate(loader):
x = x.to(trainer.device)
d1d2 = x[:, :ndigit*2]
d1d2d3 = sample(model, d1d2, ndigit+1, force_most_likely=True)
d3 = d1d2d3[:, -(ndigit+1):]
factors = torch.tensor([[10**i for i in range(ndigit+1)][::-1]]).to(trainer.device)
# decode the integers from individual digits
d1i = (d1d2[:,:ndigit] * factors[:,1:]).sum(1)
d2i = (d1d2[:,ndigit:ndigit*2] * factors[:,1:]).sum(1)
d3i_pred = (d3 * factors).sum(1)
d3i_gt = d1i + d2i
correct = (d3i_pred == d3i_gt).cpu() # Software 1.0 vs. Software 2.0 fight RIGHT on this line, lol
for i in range(x.size(0)):
results.append(int(correct[i]))
judge = 'YEP!!!' if correct[i] else 'NOPE'
if not correct[i]:
print("GPT claims that %03d + %03d = %03d (gt is %03d; %s)"
% (d1i[i], d2i[i], d3i_pred[i], d3i_gt[i], judge))
if max_batches >= 0 and b+1 >= max_batches:
break
print("final score: %d/%d = %.2f%% correct" % (np.sum(results), len(results), 100*np.mean(results)))
# -
# training set: how well did we memorize?
give_exam(train_dataset, batch_size=1024, max_batches=10)
# test set: how well did we generalize?
give_exam(test_dataset, batch_size=1024, max_batches=-1)
# +
# well that's amusing... our model learned everything except 55 + 45
# Source notebook: play_math.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %run init.ipynb
preprocessor = mz.models.ConvKNRM.get_default_preprocessor()
train_pack_processed = preprocessor.fit_transform(train_pack_raw)
dev_pack_processed = preprocessor.transform(dev_pack_raw)
test_pack_processed = preprocessor.transform(test_pack_raw)
preprocessor.context
glove_embedding = mz.datasets.embeddings.load_glove_embedding(dimension=300)
term_index = preprocessor.context['vocab_unit'].state['term_index']
embedding_matrix = glove_embedding.build_matrix(term_index)
l2_norm = np.sqrt((embedding_matrix * embedding_matrix).sum(axis=1))
embedding_matrix = embedding_matrix / l2_norm[:, np.newaxis]
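# The two lines above scale each embedding row to unit L2 norm; the same operation in plain
# Python (with a guard for all-zero rows, which the division above cannot handle):

```python
import math

def normalize_rows(matrix):
    """Scale each row to unit L2 norm; all-zero rows are returned unchanged."""
    out = []
    for row in matrix:
        norm = math.sqrt(sum(v * v for v in row))
        out.append([v / norm for v in row] if norm > 0 else list(row))
    return out

print(normalize_rows([[3.0, 4.0], [0.0, 0.0]]))  # [[0.6, 0.8], [0.0, 0.0]]
```

# Unit-norm rows make the dot products inside the kernel-pooling layers behave like cosine
# similarities, which is what KNRM-style models expect.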
trainset = mz.dataloader.Dataset(
data_pack=train_pack_processed,
mode='pair',
num_dup=5,
num_neg=1
)
testset = mz.dataloader.Dataset(
data_pack=test_pack_processed
)
# +
padding_callback = mz.models.ConvKNRM.get_default_padding_callback()
trainloader = mz.dataloader.DataLoader(
dataset=trainset,
batch_size=20,
stage='train',
resample=True,
sort=False,
callback=padding_callback
)
testloader = mz.dataloader.DataLoader(
dataset=testset,
batch_size=20,
stage='dev',
callback=padding_callback
)
# +
model = mz.models.ConvKNRM()
model.params['task'] = ranking_task
model.params['embedding'] = embedding_matrix
model.params['filters'] = 128
model.params['conv_activation_func'] = 'tanh'
model.params['max_ngram'] = 3
model.params['use_crossmatch'] = True
model.params['kernel_num'] = 11
model.params['sigma'] = 0.1
model.params['exact_sigma'] = 0.001
model.build()
print(model)
print('Trainable params: ', sum(p.numel() for p in model.parameters() if p.requires_grad))
# +
optimizer = torch.optim.Adadelta(model.parameters())
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=3)
trainer = mz.trainers.Trainer(
model=model,
optimizer=optimizer,
trainloader=trainloader,
validloader=testloader,
validate_interval=None,
epochs=10,
scheduler=scheduler,
clip_norm=10
)
# -
trainer.run()
# Source notebook: tutorials/ranking/conv_knrm.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import lux
# In this tutorial, we look at how you can select visualizations of interest from the Lux widget and export them for further analysis. We will be working with the [Happy Planet Index](http://happyplanetindex.org/) dataset, which contains metrics related to well-being for 140 countries around the world.
df = pd.read_csv('https://github.com/lux-org/lux-datasets/blob/master/data/hpi.csv?raw=true')
lux.config.default_display = "lux" # Set Lux as default display
# For the convenience of this tutorial, we set Lux as the default display to avoid having to toggle from the Pandas table display every time.
# ## Exporting one or more visualizations from widget
# In Lux, you can click on visualizations of interest and export them into a separate widget for further processing.
# <img src="https://github.com/lux-org/lux-resources/blob/master/doc_img/export-1.gif?raw=true" width=700 alt="1) scroll through Correlation, then 2) click on any 3 visualization (let's say 2nd, 5th and something towards the end), then 3) click on the export button and make sure the blue message box show up">
df
# Select charts from the widget above and click on the export button to access them here.
df.exported
# From the dataframe recommendations, the visualization showing the relationship between `GDPPerCapita` and `Footprint` is very interesting. In particular, there is an outlier with extremely high ecological footprint as well as high GDP per capita. Select this visualization and click on the export button.
df
# Select the GDP by Footprint visualization and export it to the `gdp_footprint` variable
gdp_footprint = df.exported[0]
gdp_footprint
# ## Setting the Intent with a Vis input
# Often, we might be interested in other visualizations that are related to a visualization of interest and want to learn more. With the exported Vis, we can update the intent associated with the dataframe, based on the selected Vis, to get more recommendations related to this visualization.
df.intent = gdp_footprint
df
# ## Exporting Visualizations as Code
# To allow further editing, visualizations can be exported to code in [Matplotlib](https://matplotlib.org/) or [Altair](https://altair-viz.github.io/), or as a [Vega-Lite](https://vega.github.io/vega-lite/) specification.
print (gdp_footprint.to_altair())
# This can be copy-and-pasted back into a new notebook cell for further editing.
# +
import altair as alt
chart = alt.Chart(df).mark_circle().encode(
x=alt.X('Footprint',scale=alt.Scale(domain=(0.6, 15.8)),type='quantitative'),
y=alt.Y('GDPPerCapita',scale=alt.Scale(domain=(244, 105447)),type='quantitative')
)
chart = chart.configure_mark(tooltip=alt.TooltipContent('encoding')) # Setting tooltip as non-null
chart = chart.interactive() # Enable Zooming and Panning
chart = chart.configure_title(fontWeight=500,fontSize=13,font='Helvetica Neue')
chart = chart.configure_axis(titleFontWeight=500,titleFontSize=11,titleFont='Helvetica Neue',
labelFontWeight=400,labelFontSize=8,labelFont='Helvetica Neue',labelColor='#505050')
chart = chart.configure_legend(titleFontWeight=500,titleFontSize=10,titleFont='Helvetica Neue',
labelFontWeight=400,labelFontSize=8,labelFont='Helvetica Neue')
chart = chart.properties(width=160,height=150)
chart
# -
# You can also export this as a Vega-Lite specification and view/edit the specification in the [Vega Editor](https://vega.github.io/editor).
#
#
print (gdp_footprint.to_vegalite())
# Visualizations can also be exported to code in [Matplotlib](https://matplotlib.org/).
print (gdp_footprint.to_matplotlib())
print (gdp_footprint.to_code(language="matplotlib"))
# +
import matplotlib.pyplot as plt
plt.rcParams.update(
{
"axes.titlesize": 20,
"axes.titleweight": "bold",
"axes.labelweight": "bold",
"axes.labelsize": 16,
"legend.fontsize": 14,
"legend.title_fontsize": 15,
"xtick.labelsize": 13,
"ytick.labelsize": 13,
}
)
import numpy as np
from math import nan
from matplotlib.cm import ScalarMappable
fig, ax = plt.subplots(figsize=(4.5, 4))
x_pts = df['InequalityAdjustedWellbeing']
y_pts = df['AverageWellBeing']
ax.scatter(x_pts, y_pts, alpha=0.5)
ax.set_xlabel('InequalityAdjus...dWellbeing', fontsize='15')
ax.set_ylabel('AverageWellBeing', fontsize='15')
fig
# -
# source: exercise/3-Export-Widget.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (dsblocks)
# language: python
# name: dsblocks
# ---
# +
# hide
# default_exp blocks.blocks
from nbdev.showdoc import *
from dsblocks.utils.nbdev_utils import nbdev_setup, TestRunner
nbdev_setup ()
tst = TestRunner (targets=['dummy'])
# + [markdown] tags=[]
# # Custom components
#
# > Custom components like split generator
# +
#export
import abc
import sklearn
import numpy as np
from dsblocks.core.components import Component
from dsblocks.config import bt_defaults as dflt
# +
#for tests
import numpy as np
import pandas as pd
import pytest
from sklearn.model_selection import KFold
# -
# ## Splitter
# + tags=[]
#export
class Splitter (Component):
def __init__ (self, training='train', validation='validation', test='test',
split_col='split', **kwargs):
super().__init__ (**kwargs)
def _apply (self, df):
result = dict(training=df[df[self.split_col]==self.training],
validation=df[df[self.split_col]==self.validation],
test=df[df[self.split_col]==self.test])
return {k:result[k] for k in result if not result[k].empty}
# + [markdown] tags=[] toc-hr-collapsed=true
# ### Example
# -
# exports tests.blocks.test_blocks
def test_splitter ():
df = pd.DataFrame ({'a': list(range(10)),
'b': list (range(10)),
'split': (['test','training','test','validation','test','training','validation']+
['test']*3)
})
dict_results = Splitter (training='training')(df)
reference = dict(training=[1,5],
validation=[3,6],
test=[0,2,4,7,8,9])
for k in ['training', 'validation', 'test']:
df = dict_results[k]
assert (df.columns == ['a','b','split']).all()
assert (df['split']==k).all()
assert (df.a == reference[k]).all()
assert (df.b == reference[k]).all()
# + tags=[]
tst.run (test_splitter, tag='dummy')
# -
# ## DoubleKFold
# ### DoubleKFoldBase
#export
class DoubleKFoldBase (metaclass=abc.ABCMeta):
def __init__ (self, cv, split_col='split', label_col='label', group_col=None, **kwargs):
self.cv = cv
self.n_splits = self.cv.get_n_splits ()
self.split_col = split_col
self.label_col = label_col
self.group_col = group_col
def get_n_splits (self):
return self.n_splits
@abc.abstractmethod
def split (self, df, y=None, groups=None):
pass
# ### SingleKFold
#export
class SingleKFold (DoubleKFoldBase):
def __init__ (self, cv, **kwargs):
super().__init__ (cv, **kwargs)
def split (self, df, y=None, groups=None):
groups = (groups if groups is not None
else df[self.group_col] if self.group_col is not None
else None)
y = y if y is not None else df[self.label_col]
self.generator = self.cv.split (df, y, groups=groups)
empty_array = np.array([])
for i in range(self.n_splits):
training, validation = next (self.generator)
yield training, validation, empty_array
# + [markdown] tags=[] toc-hr-collapsed=true
# #### Example / test
# -
# exports tests.blocks.test_blocks
def test_single_kfold ():
df = pd.DataFrame ({'a': list(range(10)),
'b': list (range(10)),
'label': [0]*5+[1]*5})
cv2 = SingleKFold (KFold (5))
generator = cv2.split (df)
expected = (
dict(training=[2, 3, 4, 5, 6, 7, 8, 9], validation=[0, 1]),
dict(training=[0, 1, 4, 5, 6, 7, 8, 9], validation=[2, 3]),
dict(training=[0, 1, 2, 3, 6, 7, 8, 9], validation=[4, 5]),
dict(training=[0, 1, 2, 3, 4, 5, 8, 9], validation=[6, 7]),
dict(training=[0, 1, 2, 3, 4, 5, 6, 7], validation=[8, 9])
)
for i in range (5):
training, validation, test = next (generator)
assert all(training==expected[i]['training'])
assert all(validation==expected[i]['validation'])
assert all(test == np.array([]))
# + tags=[]
tst.run (test_single_kfold, tag='dummy')
# -
# ### FixedDoubleKFold
#export
class FixedDoubleKFold (DoubleKFoldBase):
def __init__ (self, cv, input_test_label='test', **kwargs):
super().__init__ (cv, **kwargs)
self.input_test_label = input_test_label
def split (self, df, y=None, groups=None):
test = np.where(df[self.split_col]==self.input_test_label)[0]
training_cond = df[self.split_col] != self.input_test_label
training = np.where (training_cond)[0]
groups = (groups[training] if groups is not None
else df.loc[training_cond, self.group_col] if self.group_col is not None
else None)
y = (y[training] if y is not None else df.loc[training_cond, self.label_col])
self.generator = self.cv.split (df[training_cond], y, groups=groups)
for i in range(self.n_splits):
training_training, training_validation = next (self.generator)
validation_final, training_final = training[training_validation], training[training_training]
yield training_final, validation_final, test
# + [markdown] tags=[] toc-hr-collapsed=true
# #### Example / test
# -
# exports tests.blocks.test_blocks
def test_fixed_double_kfold ():
df = pd.DataFrame ({'a': list(range(20)),
'b': list (range(20)),
'split': ['training','test']*10,
'label': ([0]*5+[1]*5)*2})
cv2 = FixedDoubleKFold (KFold (5))
generator = cv2.split (df)
expected = (
dict(training=[4, 6, 8, 10, 12, 14, 16, 18], validation=[0, 2], test=[1, 3, 5, 7, 9, 11, 13, 15, 17, 19]),
dict(training=[0, 2, 8, 10, 12, 14, 16, 18], validation=[4, 6], test=[1, 3, 5, 7, 9, 11, 13, 15, 17, 19]),
dict(training=[0, 2, 4, 6, 12, 14, 16, 18], validation=[8, 10], test=[1, 3, 5, 7, 9, 11, 13, 15, 17, 19]),
dict(training=[0, 2, 4, 6, 8, 10, 16, 18], validation=[12, 14], test=[1, 3, 5, 7, 9, 11, 13, 15, 17, 19]),
dict(training=[0, 2, 4, 6, 8, 10, 12, 14], validation=[16, 18], test=[1, 3, 5, 7, 9, 11, 13, 15, 17, 19]),
)
for i in range (5):
training, validation, test = next (generator)
assert all(training==expected[i]['training'])
assert all(validation==expected[i]['validation'])
assert all(test==expected[i]['test'])
# + tags=[]
tst.run (test_fixed_double_kfold, tag='dummy')
# -
# ## SkSplitGenerator
#export
class SkSplitGenerator (Component):
def __init__ (self, split_generator, group_col=None, label_col=None, split_col=None,
use_splitter=False, training_label='training', validation_label='validation',
test_label='test', type_split='single', input_test_label='test', **kwargs):
super ().__init__ (**kwargs)
self.splitter = Splitter () if use_splitter else None
self.generator = None
if type_split == 'single':
self.split_generator = SingleKFold (split_generator, split_col=split_col, label_col=label_col,
group_col=group_col)
self.validation_label = self.test_label
elif type_split == 'fixed':
self.split_generator = FixedDoubleKFold (split_generator, input_test_label=input_test_label,
split_col=split_col, label_col=label_col,
group_col=group_col)
else:
raise NotImplementedError (f'type_split {type_split} not recognized')
def _fit_apply (self, X, y=None, **kwargs):
if self.generator is None: self.generator = self.split_generator.split (X, y=y, **kwargs)
training, validation, test = next (self.generator)
X = self._create_split (X, training, validation, test)
return X
def _apply (self, X, **kwargs):
training, validation, test = np.array([]), np.array([]), np.arange (X.shape[0])
X = self._create_split (X, training, validation, test)
return X
def _create_split (self, X, training, validation, test):
if self.split_col is not None:
X[self.split_col] = None
X[self.split_col].iloc[training] = self.training_label
X[self.split_col].iloc[validation] = self.validation_label
X[self.split_col].iloc[test] = self.test_label
else:
X = (X, (training, validation, test))
if self.use_splitter:
X = self.splitter (X)
return X
# + [markdown] tags=[] toc-hr-collapsed=true
# ### Example
# -
# exports tests.blocks.test_blocks
def test_sksplit_generator ():
df = pd.DataFrame ({'a': list(range(10)),
'b': list (range(10)),
'label': [0]*5+[1]*5})
df_original = df.copy()
generator = SkSplitGenerator (KFold (n_splits=5),
label_col='label',
split_col='split')
reference = pd.concat ([df_original, pd.DataFrame({'split': ['test']*2 + ['training']*8})], axis=1)
dfr=generator.fit_apply (df)
assert (reference==dfr).all().all()
dfr=generator.fit_apply (df)
reference = pd.concat ([df_original, pd.DataFrame({'split': ['training']*2 + ['test']*2 + ['training']*6})],
axis=1)
assert (reference==dfr).all().all()
dfr=generator.apply (df)
reference = pd.concat ([df_original, pd.DataFrame({'split': ['test']*10})], axis=1)
assert (reference==dfr).all().all()
# + tags=[]
tst.run (test_sksplit_generator, tag='dummy')
# +
df = pd.DataFrame ({'a': list(range(9)),
'b': list (range(9)),
'label': [0]*5+[1]*4})
df_original = df.copy()
generator = SkSplitGenerator (KFold (n_splits=5),
label_col='label',
split_col='split')
for i in range(5):
dfr=generator.fit_apply (df)
display(dfr)
# + [markdown] tags=[]
# ## Evaluator
# + tags=[]
#export
class PandasEvaluator (Component):
def __init__ (self, classification_metrics='accuracy_score', regression_metrics=[], custom_metrics=[],
groundtruth_col='label', prediction_col='pred', classification_col='classification', **kwargs):
classification_metrics = self._get_metrics (classification_metrics)
regression_metrics = self._get_metrics (regression_metrics)
super().__init__ (**kwargs)
def _get_metrics (self, metrics):
metrics = [metrics] if isinstance (metrics, str) else metrics
for i, metric in enumerate(metrics):
metrics[i] = getattr(sklearn.metrics, metrics[i]) if isinstance(metrics[i], str) else metrics[i]
return metrics
def _apply (self, df, **kwargs):
dict_results = {metric.__name__: metric (df[self.groundtruth_col], df[self.classification_col])
for metric in self.classification_metrics}
dict_results.update( {metric.__name__: metric (df[self.groundtruth_col], df[self.prediction_col])
for metric in self.regression_metrics})
for metric in self.custom_metrics:
dict_results.update (metric (df, label_col=self.groundtruth_col, prediction_col=self.prediction_col,
classification_col=self.classification_col))
return dict_results
# + [markdown] tags=[] toc-hr-collapsed=true
# ### Example
# + tags=[]
# exports tests.blocks.test_blocks
def test_pandas_evaluator ():
df = pd.DataFrame ({'a': list(range(10)),
'b': list (range(10)),
'label': [0]*5+[1]*5,
'classification': [0]*4+[1]*6})
assert PandasEvaluator ()(df) == {'accuracy_score': 0.9}
evaluator = PandasEvaluator (classification_metrics=['accuracy_score', 'auc'],
regression_metrics=['mean_squared_error', 'max_error'],
prediction_col='classification')
assert evaluator (df)=={'accuracy_score': 0.9, 'auc': 1.0, 'mean_squared_error': 0.1, 'max_error': 1}
# + tags=[]
tst.run (test_pandas_evaluator, tag='dummy')
# source: nbs/blocks/blocks.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Lindenmayer Grammars
#
# [**Lindenmayer systems**](https://es.wikipedia.org/wiki/Sistema-L), or **L-systems**, provide a very
# powerful technique for generating fractal images. Lindenmayer used L-systems
# to describe the behavior of plant cells and to model the growth
# process of a plant.
#
# An L-system is a rewriting system, and formally it is a kind
# of grammar. It consists of a set of **symbols** or alphabet
# (which we can represent with characters or text strings), a
# set of **production rules** that can expand each symbol
# into a longer string of symbols, an **axiom**, which is an initial
# symbol or string of symbols, and some mechanism for turning the
# generated strings into geometric structures.
#
# ### A simple example
#
# As a simple example, consider the original L-system that Lindenmayer
# used to model the growth of an alga. In this system we work with
# only two symbols, `A` and `B`. There are just two rules:
#
# - When we find `A`, replace it with `AB` (`A -> AB`)
# - When we find `B`, replace it with `A` (`B -> A`)
#
# The axiom, or initial state, is `A`.
#
# Applying the first rule we generate
# `AB` and finish this iteration. We now start from `AB`. Again,
# we replace `A` with `AB`, move on to the second character, `B`, and replace
# it with `A` following the second rule, so the final result is
# `ABA`. The third iteration yields the string `ABAAB`, the
# fourth `ABAABABA`, and so on.
#
# Let's make a first attempt at generating successive iterations:
# +
# Define the rules
def rule1(c):
if c == 'A':
return 'AB'
def rule2(c):
if c == 'B':
return 'A'
rules = set([rule1, rule2])
# The axiom, or initial state
initial = 'A'
# The function that maps one string of symbols to the next
def next(s):
    result = []
    for c in s:  # The rules are applied to the whole sequence
for rule in rules:
new_item = rule(c)
if new_item is not None:
result.append(new_item)
return ''.join(result)
status = initial
for i in range(5):
    print('Generation:', i, 'Status:', status, len(status))
    status = next(status)
print('Generation:', i, 'Status:', status, len(status))
# -
# **Note:** The lengths of these consecutive strings are 1, 2, 3, 5,
# 8... Does that ring a bell? Hint: prepend another 1 to the sequence.
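# The Fibonacci pattern in the lengths can be checked directly; here is a minimal self-contained sketch (it re-states the two rules as a dictionary, an alternative to the function-based rules above):

```python
# Algae L-system: A -> AB, B -> A; check that the string lengths are Fibonacci.
algae_rules = {'A': 'AB', 'B': 'A'}

def algae_step(s):
    # Rewrite every symbol in parallel; symbols without a rule are copied as-is
    return ''.join(algae_rules.get(c, c) for c in s)

lengths = []
status = 'A'
for _ in range(8):
    lengths.append(len(status))
    status = algae_step(status)

print(lengths)  # [1, 2, 3, 5, 8, 13, 21, 34]
```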
# ## A more complex example
#
# We can extend this basic system with a few symbols that we can interpret
# as instructions, for example `-` meaning "turn left by a given angle"
# and `+` meaning "turn right". `F` could mean "move forward".
#
# These instructions can be rendered very easily using
# turtle graphics. For example, the [Hilbert curve](https://es.wikipedia.org/wiki/Curva_de_Hilbert)
# can be drawn with the following rules:
#
# A -> − B F + A F A + F B −
#
# B -> + A F − B F B − F A +
#
# Let's create a class to represent the rules:
# +
class Rule:
def __init__(self, target, production):
self.target = target
self.production = production
def __call__(self, s):
if s == self.target:
return self.production
def __str__(self):
return '{} -> {}'.format(self.target, self.production)
rules = set([
Rule('A', 'AB'),
Rule('B', 'A'),
])
initial = 'A'
# The function that maps one string of symbols to the next
def next(s):
    result = []
    for c in s:  # The rules are applied to the whole sequence
for rule in rules:
new_item = rule(c)
if new_item is not None:
result.append(new_item)
return ''.join(result)
status = initial
for i in range(5):
    print('Generation:', i, 'Status:', status, len(status))
    status = next(status)
print('Generation:', i, 'Status:', status, len(status))
# -
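# The Hilbert-curve rules quoted above can be exercised with the same rewriting idea; here is a self-contained sketch (a dictionary stands in for the `Rule` objects, and symbols without a rule are copied unchanged):

```python
# One rewriting step of the Hilbert-curve L-system: A and B expand; F, + and - copy.
hilbert_rules = {
    'A': '-BF+AFA+FB-',
    'B': '+AF-BFB-FA+',
}

def hilbert_step(s):
    return ''.join(hilbert_rules.get(c, c) for c in s)

status = 'A'
for _ in range(3):
    status = hilbert_step(status)
print(len(status))  # 211 symbols after three generations
```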
# The complete production rules for the red alga are:
#
# a -> b|c
# b -> b
# c -> b|d
# d -> e\d
# e -> f
# f -> g
# g -> h(a)
# h -> h
# ( -> (
# ) -> )
# | -> |
# / -> \
# \ -> /
#
# +
rules = set([
Rule('a', 'b|c'),
Rule('b', 'b'),
Rule('c', 'b|d'),
Rule('d', 'e\\d'),
Rule('e', 'f'),
Rule('f', 'g'),
Rule('g', 'h(a)'),
Rule('h', 'h'),
Rule('(', '('),
Rule(')', ')'),
Rule('|', '|'),
Rule('/', '\\'),
Rule('\\', '/'),
])
initial = 'a'
status = initial
for i in range(8):
    print('Generation:', i, 'Status:', status, len(status))
    status = next(status)
print('Generation:', i+1, 'Status:', status, len(status))
# +
from PIL import Image, ImageDraw
green = (33, 233, 33)
img = Image.new('RGB', (500, 500), color=(233,233,233))
x = 5
y = 250
print(status)
def pipe():
global x, y, img, green
draw = ImageDraw.Draw(img)
lines = [
(x+5, y-5),
(x+5, y+5)
]
draw.line(lines, green)
def b():
global x, y, img, green
draw = ImageDraw.Draw(img)
lines = [
(x+5, y-5),
(x-5, y-5),
(x-5, y+5),
(x+5, y+5)
]
draw.line(lines, green, width=2)
x += 10
b(); pipe(); b()
# -
img
# +
rules = set([
Rule('F', 'F[+F]F[-F]F'),
Rule('[', '['),
Rule(']', ']'),
Rule('+', '+'),
Rule('-', '-'),
])
initial = 'F'
status = initial
for i in range(5):
    print('Generation:', i, 'Status:', status, len(status))
    status = next(status)
print('Generation:', i+1, 'Status:', status, len(status))
# -
import math, tortuga
t = tortuga.Turtle((500, 500), pos=tortuga.Vector(0, 250))
t.angle = 25.7 * math.pi / 180.
for c in status:
    if c == 'F':
        t.forward(2)
    elif c == '+':
        t.left()
    elif c == '-':
        t.right()
    elif c == '[':
        t.push()
    elif c == ']':
        t.pop()
t.img
t = tortuga.Turtle((500, 500))
t.angle = 25.7 * math.pi / 180.0
t.forward(50)
t.push()
t.right()
t.forward(50)
t.pop()
t.forward(50)
t.push()
t.left()
t.forward(50)
t.pop()
t.forward(50)
t.img
# +
rules = set([
Rule('F', 'F[+F]F[-F][FF]'),
Rule('[', '['),
Rule(']', ']'),
Rule('+', '+'),
Rule('-', '-'),
])
initial = 'F'
status = initial
for i in range(5):
status = next(status)
t = tortuga.Turtle((500, 500), pos=tortuga.Vector(0, 250))
t.angle = 20. * math.pi / 180.
for c in status:
if c == 'F':
t.forward(4)
elif c == '+':
t.left()
elif c == '-':
t.right()
elif c == '[':
t.push()
elif c == ']':
t.pop()
t.img
# -
# source: sistemas-de-lindenmayer.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import json
import random
import time
import sys
import tweepy
import credentials
import urllib.request
from os import environ
import gc
CONSUMER_KEY = credentials.CONSUMER_KEY
CONSUMER_SECRET = credentials.CONSUMER_SECRET
ACCESS_KEY = credentials.ACCESS_KEY
ACCESS_SECRET = credentials.ACCESS_SECRET
FORECAST_APIKEY = credentials.FORECAST_APIKEY
# +
def get_quotes():
with open('quotes.json') as f:
quotes_json = json.load(f)
return quotes_json['quotes']
def get_random_quote():
quotes = get_quotes()
random_quote = random.choice(quotes)
return random_quote
def create_quote():
quote = get_random_quote()
quote = """
{}
~{}
""".format(quote['quote'], quote['author'])
return quote
def get_weather():
req = urllib.request.Request(url=f'https://api.openweathermap.org/data/2.5/weather?q=Atlanta&units=imperial&appid={FORECAST_APIKEY}')
with urllib.request.urlopen(req) as resp:
data = json.loads(resp.read().decode("utf-8"))
gc.collect()
return data
def create_tweet():
data=get_weather()
temperature = str(int(round(data['main']['temp'])))
degree_sign = u'\N{DEGREE SIGN}'
description = data['weather'][0]['description']
#description = data['current']['weather'][0]['description']
tweet = "Rise Up @ATLtrackclub ATL Runners! It's currently " + temperature + degree_sign + "F and " + str(description) +". Time for a run!" + create_quote()+"\n #morningmotivation #running #atlanta #atlantatrackclub"
if len(tweet) > 280:
tweet = "Rise Up @ATLtrackclub ATL Runners! It's currently " + temperature + degree_sign + "F and " + str(description)+". Time for a run! \n #morningmotivation #running #atlanta #atlantatrackclub #runningcommunity #atlantarunners"
return tweet
def tweet_quote():
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_KEY, ACCESS_SECRET)
api = tweepy.API(auth)
tweet = create_tweet()
status = api.update_status(tweet)
print(status.id)
if __name__ == "__main__":
tweet_quote()
create_tweet()
# source: twitter_bot.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/snoop2head/OIA_Text_Wrangling/blob/master/_Department_Analysis.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="YQloztsb3XgM" colab_type="code" outputId="373d7fe1-585a-4f4a-9c89-0b65fe7f5ae8" colab={"base_uri": "https://localhost:8080/", "height": 52}
import pandas as pd
from pandas.api.types import CategoricalDtype  # to view graph values in sorted order
import numpy as np
print(pd.__version__)
print(np.__version__)
# + id="AYGdRpMV4ZuG" colab_type="code" outputId="0215eef6-72da-4f11-950d-d5db8962df6f" colab={"base_uri": "https://localhost:8080/", "height": 34}
from google.colab import drive
drive.mount("/content/gdrive")
# + id="sXs9mLyR4a0C" colab_type="code" outputId="c5a34be4-aeb1-4862-f50c-2bf5b70c588a" colab={"base_uri": "https://localhost:8080/", "height": 34}
file_name = "_department_t.csv"
path = "/content/gdrive/My Drive/_OIA_Project/" + file_name
df = pd.read_csv(path)
# Check how large the data is.
df.shape
# + id="AW3ZMcPD6Ekm" colab_type="code" outputId="a2b044ca-4f25-4938-ed2d-9ef430cbf721" colab={"base_uri": "https://localhost:8080/", "height": 669}
df
# + id="ssbmCp7jmgYm" colab_type="code" colab={}
df.columns = df.iloc[0]
# + id="mBltlIIt7t-8" colab_type="code" colab={}
header_list = df.columns.to_list()
# + id="wOsVHTromiAP" colab_type="code" outputId="bf970ffe-27f4-47ce-8c9c-12fd032dd554" colab={"base_uri": "https://localhost:8080/", "height": 652}
df = df[1:]
df
# + id="PlD-NKoM5W6v" colab_type="code" colab={}
import numpy as np
def single_dp_dict(df_column):
# university_name = df_column[0]
single_column = df_column[0:]
single_column_list = single_column.to_list()
column_lst_without_nan = [x for x in single_column_list if x == x]
splitted_list = []
for i in column_lst_without_nan:
if "/" in i:
# print(i)
double_element = i.split("/")
# print(double_element)
# splitted_list.remove(i)
splitted_list += double_element
elif "," in i:
double_element = i.split(",")
splitted_list += double_element
else:
splitted_list.append(i)
from collections import defaultdict
fq= defaultdict( int )
for w in splitted_list:
fq[w] += 1
number_of_departments = len(splitted_list)
# print(number_of_departments)
# print(university_name)
dictionary = dict(fq)
return dictionary
# + id="12vOk_4M0SV7" colab_type="code" outputId="b5fbe644-96d8-4b1c-8d51-ff54651dd7f9" colab={"base_uri": "https://localhost:8080/", "height": 34}
from statistics import variance
# df_column = df[header_list[25]]
# df_column1 = df['Aalto University']
# # single_dict = single_dp_dict(df_column1)
# print(single_dict)
no_of_students('Aalto University')
# variance1 = variance(single_dict[k] for k in single_dict)
# df_column2 = df['York University: Schulich School of Business']
# single_dict = single_dp_dict(df_column2)
# print(single_dict)
# variance2 = variance(single_dict[k] for k in single_dict)
# print(variance1, variance2)
# + id="PA1TeyVM09wv" colab_type="code" colab={}
def fn_univ_variance(univ_name):
df_column = df[univ_name]
single_dict = single_dp_dict(df_column)
var = variance(single_dict[k] for k in single_dict)
return var
def no_of_students(univ_name):
df_column = df[univ_name]
# print(df_column)
single_dict = single_dp_dict(df_column)
# print(single_dict)
no_of_students = sum(single_dict[k] for k in single_dict)
return no_of_students
def no_of_departments(univ_name):
df_column = df[univ_name]
# print(df_column)
single_dict = single_dp_dict(df_column)
no_of_departments = len(single_dict)
return no_of_departments
# + id="UXDXkwxaea2-" colab_type="code" colab={}
# list_of_dict = []
# for i in header_list:
# df_column = df[i]
# single_dict = single_dp_dict(df_column)
# list_of_dict.append(single_dict)
# department_matrix = pd.DataFrame(list_of_dict)
# department_matrix.fillna(0)
# department_matrix.to_csv("/content/gdrive/My Drive/_OIA_Project/_department_matrix_mark4.csv",index=False,encoding="utf-8")
# + id="4tP4qygmw94m" colab_type="code" colab={}
# department_matrix.plot.hist()
# + id="330_qIR4puye" colab_type="code" colab={}
# p = r'.*(UD|Econ|UIC).*'
# finance = df[df['title'].str.match(p) |
# df['content'].str.match(p, flags=re.MULTILINE)]
# finance.shape
# + id="Zk9Os4Z__yEY" colab_type="code" outputId="c4a92339-848a-4ae9-a41f-e04b2b29c51e" colab={"base_uri": "https://localhost:8080/", "height": 408}
var_list = []
for univ in header_list:
var = fn_univ_variance(univ)
students_no = no_of_students(univ)
department_no = no_of_departments(univ)
var_dict = {'name':univ,
'variance':var,
'no_of_students':students_no,
'no_of_departments':department_no}
var_list.append(var_dict)
depart_var_df = pd.DataFrame(var_list)
depart_var_df
# depart_var_df.to_csv("/content/gdrive/My Drive/_OIA_Project/_department_var_df_mark1.csv",index=False,encoding="utf-8")
# + id="k6m4dnScao6b" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 559} outputId="d1dae6ae-54af-461c-d473-db4c6fb491e1"
# clustering dataset
# determine k using elbow method
from sklearn.cluster import KMeans
from sklearn import metrics
from scipy.spatial.distance import cdist
import numpy as np
import matplotlib.pyplot as plt
x1 = training_df['var']  # x axis (training_df is built in the K-means cell below)
x2 = training_df['size']  # y axis
plt.plot()
plt.xlim([-10, 200])
plt.ylim([0, 200])
plt.title('Dataset')
plt.scatter(x1, x2)
plt.show()
# create new plot and data
plt.plot()
X = np.array(list(zip(x1, x2))).reshape(len(x1), 2)
colors = ['b', 'g', 'r']
markers = ['o', 'v', 's']
# k means determine k
distortions = []
K = range(1,10)
for k in K:
kmeanModel = KMeans(n_clusters=k).fit(X)
kmeanModel.fit(X)
distortions.append(sum(np.min(cdist(X, kmeanModel.cluster_centers_, 'euclidean'), axis=1)) / X.shape[0])
# Plotting the elbow
plt.plot(K, distortions, 'bx-')
plt.xlabel('k')
plt.ylabel('Distortion')
plt.title('The Elbow Method showing the optimal k')
plt.show()
# + id="fnjA0n6tz0qq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 351} outputId="c741646d-d963-474a-fd4c-2afd1a08ed6e"
from pandas import DataFrame
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
#variance as x axis
#number of students as y axis
training_df = pd.concat([depart_var_df['variance'], depart_var_df['no_of_students']], axis=1, keys=['var', 'size'])
kmeans = KMeans(n_clusters=4).fit(training_df)
centroids = kmeans.cluster_centers_
print(centroids)
plt.scatter(training_df['var'], training_df['size'], c= kmeans.labels_.astype(float), s=30, alpha=0.5)
plt.scatter(centroids[:, 0], centroids[:, 1], c='red', s=50)
# + id="OB7OdniObbXa" colab_type="code" colab={}
# source: cluster_departments_Kmeans.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# $$
# \frac{dy}{dt} = -ky
# $$
# where $y(0)=y_0 \sim N(0,1)$, $k\sim N(0,1)$ independent
#
# $$
# y(t,k,y_0) = \sum_{i=0}^{P(P+3)/2} y_i(t) Q_i(\xi_1, \xi_2)
# $$
# where $\xi_1, \xi_2\sim N(0,1)$ and they are independent
# | 1 | 2 | 3 | 4 | 5 |
# | --- | --- | --- | --- | --- |
# | $1$ | | | | |
# | $H_1$ | $J_1$ | | | |
# | $H_2$ | $H_1J_1$ | $J_2$ | | |
# | $H_3$ | $H_2J_1$ | $H_1J_2$ | $J_3$ | |
# | $H_4$ | $H_3J_1$ | $H_2J_2$ | $H_1J_3$ | $J_4$ |
#
# | 1 | 2 | 3 | 4 | 5 |
# | --- | --- | --- | --- | --- |
# | $Q_0$ | | | | |
# | $Q_1$ | $Q_2$ | | | |
# | $Q_3$ | $Q_4$ | $Q_5$ | | |
# | $Q_6$ | $Q_7$ | $Q_8$ | $Q_9$ | |
# | $Q_{10}$ | $Q_{11}$ | $Q_{12}$ | $Q_{13}$ | $Q_{14}$ |
# $$
# k = \sum_{i=0}^P k_i H_i(\xi_1) = \sum_{i=0}^P k_i Q_{i(i+1)/2}
# $$
#
# $$
# \sum_{l=0}^{P(P+3)/2} \frac{d y_l(t)}{dt} Q_l(\xi_1, \xi_2) = - \left(\sum_{i=0}^P k_i Q_{i(i+1)/2}\right) \left(\sum_{j=0}^{P(P+3)/2} y_j(t) Q_j(\xi_1, \xi_2)\right)
# $$
#
# $$
# \frac{dy_l(t)}{dt} = - \frac{1}{\langle Q_l^2\rangle} \sum_{i=0}^P\sum_{j=0}^{P(P+3)/2} k_i y_j \langle Q_{i(i+1)/2}Q_jQ_l\rangle
# $$
# +
import numpy as np
import timeit
import numpy.polynomial.hermite_e as H
from math import factorial
from scipy.stats import norm
from scipy.integrate import odeint
from matplotlib import pyplot as plt
# %matplotlib notebook
# +
def Phi(n):
#define H_n
coeffs = [0]*(n+1)
coeffs[n] = 1
return coeffs
def inner2_herm(n): ###return the denominator when computing $k_i$
return factorial(n)
def product3_herm(i,j,l):
#compute \Phi_i*\Phi_j*\Phi_l
return lambda x: H.hermeval(x, H.hermemul(H.hermemul(Phi(i),Phi(j)),Phi(l)))
def inner3_herm(P,i,j,l):
#compute <\Phi_i\Phi_j\Phi_l>
#Set up Gauss-Hermite quadrature; the hermegauss weight function is exp(-x^2/2)
m=(P+1)**2
x, w=H.hermegauss(m)
inner=sum([product3_herm(i,j,l)(x[idx]) * w[idx] for idx in range(m)])
return inner/np.sqrt(2*np.pi) #because of the weight
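# As a sanity check on the quadrature above (a two-factor variant, not part of the original code): the probabilists' Hermite polynomials satisfy $\langle He_n He_m\rangle = n!\,\delta_{nm}$ under the standard normal weight:

```python
import numpy as np
import numpy.polynomial.hermite_e as H

def herme_inner2(n, m, npts=20):
    # <He_n He_m> under the standard normal density, via Gauss-Hermite-e quadrature
    x, w = H.hermegauss(npts)
    vals = H.hermeval(x, [0]*n + [1]) * H.hermeval(x, [0]*m + [1])
    return vals @ w / np.sqrt(2*np.pi)  # divide out the missing 1/sqrt(2*pi) of the weight

print(round(herme_inner2(3, 3), 6), round(abs(herme_inner2(2, 3)), 6))  # 6.0 (= 3!) and 0.0
```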
# +
P=4
ki_herm = [0,1]+[0]*(P-1)
Inner3_herm = np.zeros((P+1,P+1,P+1)) #store all inner3_herm values
Inner2_herm = np.zeros(P+1)
for i in range(P+1):
for j in range(P+1):
for l in range(P+1):
Inner3_herm[i,j,l] = inner3_herm(P,i,j,l)
for i in range(P+1):
Inner2_herm[i] = inner2_herm(i)
# -
def index(i):
if i == 0:
return np.array([0, 0])
elif i == 1:
return np.array([1, 0])
elif i ==2:
return np.array([0, 1])
else:
for n in range(2,P+1):
q=2
if i // int((n+2)*(n-1)/2) >= 1 and i // int((n+3)*n/2+1) == 0:
q = n
v = i % int((q+2)*(q-1)/2+1)
w = int(q-v)
break
return np.array([w,v])
index(4)
# \begin{align*}
# \langle Q_i Q_j Q_l \rangle &= \langle H_{index(i)[0]}J_{index(i)[1]}H_{index(j)[0]}J_{index(j)[1]}H_{index(l)[0]}J_{index(l)[1]} \rangle\\
# & = \langle H_{index(i)[0]}H_{index(j)[0]}H_{index(l)[0]}\rangle \langle J_{index(i)[1]}J_{index(j)[1]}J_{index(l)[1]}\rangle\\
# & = Inner3_{herm}[index(i)[0],index(j)[0],index(l)[0]] \times Inner3_{herm}[index(i)[1],index(j)[1],index(l)[1]]
# \end{align*}
# \begin{align*}
# \langle Q_i^2 \rangle &= \langle H_{index(i)[0]}J_{index(i)[1]}H_{index(i)[0]}J_{index(i)[1]}\rangle\\
# &= \langle H_{index(i)[0]}H_{index(i)[0]}\rangle \langle J_{index(i)[1]}J_{index(i)[1]}\rangle\\
# & = Inner2_{herm}[index(i)[0]] \times Inner2_{herm}[index(i)[1]]
# \end{align*}
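# The factorization above can be checked by Monte Carlo: for $Q = H_2(\xi_1)J_1(\xi_2)$ we expect $\langle Q^2\rangle = 2!\cdot 1! = 2$ (a standalone sketch):

```python
import numpy as np
import numpy.polynomial.hermite_e as H

# Sample independent standard normals and evaluate Q = He_2(xi1) * He_1(xi2)
rng = np.random.default_rng(1)
xi1 = rng.standard_normal(400_000)
xi2 = rng.standard_normal(400_000)
q = H.hermeval(xi1, [0, 0, 1]) * H.hermeval(xi2, [0, 1])
print(np.mean(q**2))  # ~ 2 = 2! * 1!
```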
# +
P = 4
ki_herm = [0,1]+[0]*(P-1)
# when P=4, the largest index of Q is P(P+3)/2
z = int(P*(P+3)/2+1)
Inner3_q = np.zeros((z,z,z)) #store all inner3_q values
Inner2_q = np.zeros(z)
for i in range(z):
for j in range(z):
for l in range(z):
a = index(i)[0]
b = index(j)[0]
c = index(l)[0]
d = index(i)[1]
e = index(j)[1]
f = index(l)[1]
Inner3_q[i,j,l] = Inner3_herm[a,b,c]*Inner3_herm[d,e,f]
for i in range(z):
a = index(i)[0]
b = index(i)[1]
Inner2_q[i] = Inner2_herm[a]*Inner2_herm[b]
# -
# $$
# \frac{dy_l(t)}{dt} = - \frac{1}{\langle Q_l^2\rangle} \sum_{i=0}^P\sum_{j=0}^{P(P+3)/2} k_i y_j \langle Q_{i(i+1)/2}Q_jQ_l\rangle
# $$
def ode_system_q(y, t, P):
#P indicates the highest degree
dydt = np.zeros(int(P*(P+3)/2+1))
for l in range(len(dydt)):
dydt[l] = -(sum(sum(Inner3_q[int(i*(i+1)/2),j,l]*ki_herm[i]*y[j] for j in range(int(P*(P+3)/2+1))) for i in range(P+1)))/Inner2_q[l]
return dydt
# +
y_init = [0.0, 0.0, 1.0]+[0.0]*int(P*(P+3)/2-2)
sol_q = odeint(ode_system_q, y_init, np.linspace(0,1,101), args=(P, ))
# -
# **Analytically**
# $$
# y(t,k,y_0) = y_0 e^{-kt}
# $$
#
# \begin{align*}
# \bar{y}(t) &= \int_Y\int_X y_0 e^{-kt}\ d f_1(k)\ d f_2(y_0)\\
# &= \int_Y y_0\ d f_2(y_0) \int_X e^{-kt}\ d f_1(k)\\
# &=0
# \end{align*}
# where $Y$ is the support of the pdf of $\xi_2$ and $X$ is the support of the pdf of $\xi_1$
# **PC:**
# $$
# y(t,k,y_0) = \sum_{i=0}^{P(P+3)/2} y_i(t) Q_i(\xi_1, \xi_2)
# $$
#
# $$
# \bar{y}(t) = y_0(t)
# $$
sol_q[100,0] #return y_0(t=1)
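# A quick Monte Carlo check of the analytic mean (a standalone sketch, independent of the PC solve): sampling $y_0 e^{-kt}$ directly should give a mean near 0, matching $y_0(t=1)$ above:

```python
import numpy as np

# With y0, k ~ N(0,1) independent, E[y(t)] = E[y0] * E[e^{-k t}] = 0 for every t
rng = np.random.default_rng(0)
y0 = rng.standard_normal(500_000)
k = rng.standard_normal(500_000)
print(np.mean(y0 * np.exp(-k * 1.0)))  # close to 0
```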
| 2-d-pre.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import sys
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from nets import *
from cfgs import *
from data import *
from clip_ops.clip_ops import *
from trainer import *
# +
# %matplotlib inline
save_plot = False
plt.rcParams.update({'font.size': 10, 'axes.labelsize': 'x-large'})
D = 201
x = np.linspace(0, 1.0, D)
X_tst = np.stack([v.flatten() for v in np.meshgrid(x,x)], axis = -1)
X_tst = np.expand_dims(X_tst, 1)
print(X_tst.shape)
cfg = additive_1x2_uniform_config.cfg
cfg.test.num_misreports = 1
cfg.test.gd_iter = 0
cfg.test.batch_size = D
cfg.test.num_batches = int(X_tst.shape[0]/cfg.test.batch_size)
cfg.test.save_output = True
# -
Net = additive_net.Net
Generator = uniform_01_generator.Generator
clip_op_lambda = (lambda x: clip_op_01(x))
Trainer = trainer.Trainer
net = Net(cfg)
generator = Generator(cfg, 'test', X_tst)
clip_op_lambda = (lambda x: tf.assign(x, tf.clip_by_value(x, 0.0, 1.0)))
m = Trainer(cfg, "test", net, clip_op_lambda)
m.test(generator)
alloc = np.load(os.path.join(cfg.dir_name, "alloc_tst_" + str(cfg.test.restore_iter) + ".npy")).reshape(D,D,2)
pay = np.load(os.path.join(cfg.dir_name, "pay_tst_" + str(cfg.test.restore_iter) + ".npy")).reshape(D,D,1)
# +
x1 = (2.0 - np.sqrt(2.0))/3.0
x2 = 2.0/3.0
points = [(x1, 1.0), (x1, x2), (x2, x1), (x2, 0)]
x = list(map(lambda x: x[0], points))
y = list(map(lambda x: x[1], points))
fig, ax = plt.subplots(ncols = 1, nrows = 1, figsize = (8,6))
plt.plot(x, y, linewidth = 2, linestyle = '--', c='black')
img = ax.imshow(alloc[::-1, :, 0], extent=[0,1,0,1], vmin = 0.0, vmax=1.0, cmap = 'YlOrRd')
plt.text(0.25, 0.25, s='0', color='black', fontsize='10', fontweight='bold')
plt.text(0.65, 0.65, s='1', color='black', fontsize='10', fontweight='bold')
ax.set_xlabel('$v_1$')
ax.set_ylabel('$v_2$')
plt.title('Prob. of allocating item 1')
_ = plt.colorbar(img, fraction=0.046, pad=0.04)
if save_plot:
fig.set_size_inches(4, 3)
plt.savefig(os.path.join(cfg.dir_name, 'alloc1.pdf'), bbox_inches = 'tight', pad_inches = 0.05)
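# The `alloc[::-1, :, 0]` row flip puts the $(0,0)$ corner at the bottom-left; `imshow(..., origin='lower')` gives the same orientation. A standalone sketch (not tied to the notebook's data):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless backend for this sketch
import matplotlib.pyplot as plt

a = np.arange(6).reshape(3, 2)
fig, (ax1, ax2) = plt.subplots(1, 2)
im1 = ax1.imshow(a[::-1, :], extent=[0, 1, 0, 1])         # manual row flip
im2 = ax2.imshow(a, extent=[0, 1, 0, 1], origin='lower')  # same picture
print(np.array_equal(np.asarray(im1.get_array())[::-1, :], np.asarray(im2.get_array())))  # True
```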
# +
x1 = (2.0 - np.sqrt(2.0))/3.0
x2 = 2.0/3.0
points = [(0.0, x2), (x1, x2), (x2, x1), (1.0, x1)]
x = list(map(lambda x: x[0], points))
y = list(map(lambda x: x[1], points))
plt.rcParams.update({'font.size': 10, 'axes.labelsize': 'x-large'})
fig, ax = plt.subplots(ncols = 1, nrows = 1, figsize = (8,6))
plt.plot(x, y, linewidth = 2, linestyle = '--', c='black')
img = ax.imshow(alloc[::-1, :, 1], extent=[0,1,0,1], vmin = 0.0, vmax=1.0, cmap = 'YlOrRd')
plt.text(0.25, 0.25, s='0', color='black', fontsize='10', fontweight='bold')
plt.text(0.65, 0.65, s='1', color='black', fontsize='10', fontweight='bold')
ax.set_xlabel('$v_1$')
ax.set_ylabel('$v_2$')
plt.title('Prob. of allocating item 2')
_ = plt.colorbar(img, fraction=0.046, pad=0.04)
if save_plot:
fig.set_size_inches(4, 3)
plt.savefig(os.path.join(cfg.dir_name, 'alloc2.pdf'), bbox_inches = 'tight', pad_inches = 0.05)
# -
| regretNet/visualize_additive_1x2_uniform.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_pytorch_p36
# language: python
# name: conda_pytorch_p36
# ---
# # ResNet34 - 2 Classes - Experiments
# Welcome to lesson 1! For those of you who are using a Jupyter Notebook for the first time, you can learn about this useful tool in a tutorial we prepared specially for you; click `File`->`Open` now and click `00_notebook_tutorial.ipynb`.
#
# In this lesson we will build our first image classifier from scratch, and see if we can achieve world-class results. Let's dive in!
#
# Every notebook starts with the following three lines; they ensure that any edits to libraries you make are reloaded here automatically, and also that any charts or images displayed are shown in this notebook.
# %reload_ext autoreload
# %autoreload 2
# %matplotlib inline
# We import all the necessary packages. We are going to work with the [fastai V1 library](http://www.fast.ai/2018/10/02/fastai-ai/) which sits on top of [Pytorch 1.0](https://hackernoon.com/pytorch-1-0-468332ba5163). The fastai library provides many useful functions that enable us to quickly and easily build neural networks and train our models.
from fastai.vision import *
from fastai.metrics import error_rate
# If you're using a computer with an unusually small GPU, you may get an out of memory error when running this notebook. If this happens, click Kernel->Restart, uncomment the 2nd line below to use a smaller *batch size* (you'll learn all about what this means during the course), and try again.
bs = 64
# bs = 16 # uncomment this line if you run out of memory even after clicking Kernel->Restart
# ## Looking at the data
# We are going to use the [Oxford-IIIT Pet Dataset](http://www.robots.ox.ac.uk/~vgg/data/pets/) by [<NAME> et al., 2012](http://www.robots.ox.ac.uk/~vgg/publications/2012/parkhi12a/parkhi12a.pdf) which features 12 cat breeds and 25 dog breeds. Our model will need to learn to differentiate between these 37 distinct categories. According to their paper, the best accuracy they could get in 2012 was 59.21%, using a complex model that was specific to pet detection, with separate "Image", "Head", and "Body" models for the pet photos. Let's see how accurate we can be using deep learning!
#
# We are going to use the `untar_data` function to which we must pass a URL as an argument and which will download and extract the data.
help(untar_data)
#path = untar_data(URLs.PETS); path
path = Path(r'/home/ec2-user/SageMaker/classify-streetview/images')
path
path.ls()
#path_anno = path/'annotations'
path_img = path
# The first thing we do when we approach a problem is to take a look at the data. We _always_ need to understand very well what the problem is and what the data looks like before we can figure out how to solve it. Taking a look at the data means understanding how the data directories are structured, what the labels are and what some sample images look like.
#
# The main difference between the handling of image classification datasets is the way labels are stored. In this particular dataset, labels are stored in the filenames themselves. We will need to extract them to be able to classify the images into the correct categories. Fortunately, the fastai library has a handy function made exactly for this, `ImageDataBunch.from_name_re` gets the labels from the filenames using a [regular expression](https://docs.python.org/3.6/library/re.html).
fnames = get_image_files(path_img)
fnames[:5]
tfms = get_transforms(do_flip=False)
#data = ImageDataBunch.from_folder(path_img, ds_tfms=tfms, size=224)
# +
#np.random.seed(2)
#pat = r'/([^/]+)_\d+.jpg$'
# +
#data = ImageDataBunch.from_name_re(path_img, fnames, pat, ds_tfms=get_transforms(), size=224, bs=bs
# ).normalize(imagenet_stats)
# +
classes_list = ['1_null', '3_present']
# https://docs.fast.ai/vision.data.html#ImageDataBunch.from_folder
data = ImageDataBunch.from_folder(path, ds_tfms = tfms, size = 224, classes = classes_list, bs=bs).normalize()
# -
data.show_batch(rows=3, figsize=(7,6))
print(data.classes)
len(data.classes),data.c
# ## Training: resnet34
# Now we will start training our model. We will use a [convolutional neural network](http://cs231n.github.io/convolutional-networks/) backbone and a fully connected head with a single hidden layer as a classifier. Don't know what these things mean? Not to worry, we will dive deeper in the coming lessons. For the moment you need to know that we are building a model which will take images as input and will output the predicted probability for each of the categories (in this case, it will have 37 outputs).
#
# We will train for 4 epochs (4 cycles through all our data).
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.model
learn.fit_one_cycle(4)
learn.save('stage-1-2class')
# ## Results
# Let's see what results we have got.
#
# We will first see which were the categories that the model most confused with one another. We will try to see if what the model predicted was reasonable or not. In this case the mistakes look reasonable (none of the mistakes seems obviously naive). This is an indicator that our classifier is working correctly.
#
# Furthermore, when we plot the confusion matrix, we can see that the distribution is heavily skewed: the model makes the same mistakes over and over again but it rarely confuses other categories. This suggests that it just finds it difficult to distinguish some specific categories between each other; this is normal behaviour.
# +
interp = ClassificationInterpretation.from_learner(learn)
losses,idxs = interp.top_losses()
len(data.valid_ds)==len(losses)==len(idxs)
# -
interp.plot_top_losses(9, figsize=(15,11))
doc(interp.plot_top_losses)
interp.plot_confusion_matrix(figsize=(4,4), dpi=60)
interp.most_confused(min_val=2)
# ## Unfreezing, fine-tuning, and learning rates
# Since our model is working as we expect it to, we will *unfreeze* our model and train some more.
learn.unfreeze()
learn.fit_one_cycle(1)
learn.load('stage-1-2class');  # load the checkpoint saved above (it was saved as 'stage-1-2class', not 'stage-1')
learn.lr_find()
learn.recorder.plot()
learn.unfreeze()
learn.fit_one_cycle(2, max_lr=slice(1e-6,1e-4))
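# `max_lr=slice(1e-6, 1e-4)` spreads learning rates across layer groups, smaller for early layers. A sketch of the idea only (assuming a log-uniform spread, which is not necessarily fastai's exact scheme):

```python
import numpy as np

def discriminative_lrs(lo, hi, n_groups):
    # log-spaced learning rates from the earliest to the latest layer group
    return np.geomspace(lo, hi, n_groups)

print(discriminative_lrs(1e-6, 1e-4, 3))  # roughly [1e-6, 1e-5, 1e-4]
```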
# That's a pretty accurate model!
# ## Training: resnet50
# Now we will train in the same way as before but with one caveat: instead of using resnet34 as our backbone we will use resnet50 (resnet34 is a 34 layer residual network while resnet50 has 50 layers. It will be explained later in the course and you can learn the details in the [resnet paper](https://arxiv.org/pdf/1512.03385.pdf)).
#
# Basically, resnet50 usually performs better because it is a deeper network with more parameters. Let's see if we can achieve a higher performance here. To help it along, let's us use larger images too, since that way the network can see more detail. We reduce the batch size a bit since otherwise this larger network will require more GPU memory.
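# A quick arithmetic check of the memory trade-off (per-image activation memory scales roughly with pixel count — an approximation, not an exact model):

```python
# Going from 224x224 to 299x299 images costs about (299/224)^2 more activation memory per image
ratio = (299 / 224) ** 2
print(round(ratio, 2))  # 1.78 -- so halving the batch size (bs//2) more than offsets it
```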
pat = r'/([^/]+)_\d+.jpg$'  # label-from-filename pattern; defined here because the earlier definition was commented out
data = ImageDataBunch.from_name_re(path_img, fnames, pat, ds_tfms=get_transforms(),
size=299, bs=bs//2).normalize(imagenet_stats)
learn = cnn_learner(data, models.resnet50, metrics=error_rate)
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(8)
learn.save('stage-1-50')
# It's astonishing that it's possible to recognize pet breeds so accurately! Let's see if full fine-tuning helps:
learn.unfreeze()
learn.fit_one_cycle(3, max_lr=slice(1e-6,1e-4))
# If it doesn't, you can always go back to your previous model.
learn.load('stage-1-50');
interp = ClassificationInterpretation.from_learner(learn)
interp.most_confused(min_val=2)
# ## Other data formats
path = untar_data(URLs.MNIST_SAMPLE); path
tfms = get_transforms(do_flip=False)
data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=26)
data.show_batch(rows=3, figsize=(5,5))
learn = cnn_learner(data, models.resnet18, metrics=accuracy)
learn.fit(2)
df = pd.read_csv(path/'labels.csv')
df.head()
data = ImageDataBunch.from_csv(path, ds_tfms=tfms, size=28)
data.show_batch(rows=3, figsize=(5,5))
data.classes
data = ImageDataBunch.from_df(path, df, ds_tfms=tfms, size=24)
data.classes
fn_paths = [path/name for name in df['name']]; fn_paths[:2]
pat = r"/(\d)/\d+\.png$"
data = ImageDataBunch.from_name_re(path, fn_paths, pat=pat, ds_tfms=tfms, size=24)
data.classes
data = ImageDataBunch.from_name_func(path, fn_paths, ds_tfms=tfms, size=24,
label_func = lambda x: '3' if '/3/' in str(x) else '7')
data.classes
labels = [('3' if '/3/' in str(x) else '7') for x in fn_paths]
labels[:5]
data = ImageDataBunch.from_lists(path, fn_paths, labels=labels, ds_tfms=tfms, size=24)
data.classes
| model2-less-classes/2020-03-30-resnet34-2classes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.5 64-bit (''venv'': venv)'
# language: python
# name: python38564bitvenvvenv83ba967ace5240319bcf0470eba02fe6
# ---
# +
# !which python; python -V;
# This makes the diagrams appear more reliably in the Jupyter environment
import plotly.io as pio
pio.renderers.default = "notebook_connected"
# This will cause the ephemerides to be imported from the JPL Horizons system
from astropy.coordinates import solar_system_ephemeris
solar_system_ephemeris.set("jpl")
# -
# We will use the interplanetary package from perylune to calculate interplanetary transfers
from perylune.interplanetary import *
# +
# These are the Hohmann trajectories we want to investigate.
trajectories = [
# PLANETS:
["earth", "mercury"],
["earth", "venus"],
["earth", "mars"],
["earth", "jupiter"],
["earth", "saturn"],
["earth", "uranus"],
["earth", "neptune"],
# DWARF PLANETS
["earth", "ceres"],
["earth", "eris"],
["earth", "pluto"],
["earth", "makemake"],
["earth", "haumea"],
# ASTEROIDS
["earth", "vesta"],
["earth", "bernievolz"]
]
# Print the header
print(transfer_vel_header())
# And now calculate the trajectories
for traj in trajectories:
v = transfer_vel(traj[0], traj[1], None)
txt = transfer_vel_format(traj[0], traj[1], v, ",")
print(txt)
# -
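# `transfer_vel` comes from perylune; the core Hohmann arithmetic can be sketched from the vis-viva equation alone. The constants and the Earth→Mars radii below are rounded assumptions:

```python
import numpy as np

MU_SUN = 1.32712440018e20  # Sun's gravitational parameter, m^3/s^2
AU = 1.495978707e11        # astronomical unit, m

def hohmann_dv(r1, r2, mu=MU_SUN):
    # vis-viva: v^2 = mu * (2/r - 1/a), with a the transfer ellipse's semi-major axis
    a = (r1 + r2) / 2
    dv_depart = np.sqrt(mu * (2/r1 - 1/a)) - np.sqrt(mu / r1)  # burn at the inner orbit
    dv_arrive = np.sqrt(mu / r2) - np.sqrt(mu * (2/r2 - 1/a))  # circularize at the outer orbit
    return dv_depart, dv_arrive

dv1, dv2 = hohmann_dv(1.0 * AU, 1.524 * AU)  # Earth -> Mars, circular coplanar orbits
print(round(dv1), round(dv2))  # roughly 2900 and 2700 m/s
```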
| jupyter/04-interplanetary-delta-v.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Exercise: Polish Notation
# Write a function that evaluates an expression written in Polish (prefix) notation
# https://it.wikipedia.org/wiki/Notazione_polacca
#
# We consider only binary operations: +,−,∗,/
# and only integer values
# Scan the expression from right to left
#
# for each symbol:
#
# if the symbol is an operand, push it onto a stack
#
# if it is an operator op, then:
#
# pop the first element from the stack and save it in x
#
# pop the first element from the stack and save it in y
#
# push the result x op y onto the stack
#
# return the top of the stack as the result
# Hint:
#
# Write a function isoperator(elem) that checks whether elem is an operator op = +,-,\,*. Write a function comp(op,x,y) that, given an operator op and two numbers x, y, returns the result of the operation z = x op y. Write a function polf(s) that scans the elements of s from the right: if they are operands it appends them to a stack; if they are operators (checked with isoperator(elem)) it pops the last two elements x, y from the stack and appends the result of comp(op,x,y) to the stack.
# ### A method for implementing an algorithm
#
# - identify the basic actions the algorithm has to perform
# - for each action, provide an implementation and test it with the Python interpreter
# - wrap it in a function and test it
# - combine the individual implementations
# - test the function in the Python interpreter
# ### Solution to the Polish notation exercise
#
# ###### Algorithm:
#
# - Scan the expression from right to left
#
# - for each symbol:
#
# - if the symbol is an operand, push it onto a stack
#
# - if it is an operator op, then:
#
# - pop the first element from the stack and save it in x
#
# - pop the first element from the stack and save it in y
#
# - push the result x op y onto the stack
#
# - return the top of the stack as the result
#
#
# ###### Tasks:
#
# -1- scan the expression from right to left
#
# -2- check whether the symbol is an operator or an operand
#
# -3- implement the append, push and pop operations in Python
#
# -4- given an operator, perform the associated operation
expr = '- * / 15 - 7 + 1 1 3 + 2 + 1 1'
l_expr=expr.split()
l_expr
# **1** - scan the expression from right to left
for i in range(1,len(l_expr)+1):
print(l_expr[len(l_expr)-i])
for i in range(len(l_expr)-1,-1,-1):
print(l_expr[i])
l_expr.reverse()
l_expr
# **2** - check whether the symbol is an operator or an operand
def isAnOperator( x ): # returns True if x is '+', '-', '*', '/'
return (x=='+') or (x=='-') or (x=='*') or (x=='/')
isAnOperator( '9' )
# **3** - implement the append, push and pop operations in Python
l_expr
l_expr[3],l_expr[5]
pila = []
pila.append(int(l_expr[3]))
pila.append(int(l_expr[5]))
pila
x = pila.pop()
y = pila.pop()
x,y
# **4** - given an operator, perform the associated operation
# compute the value of the expression 'x op y',
# where x and y are two integers and op is one of '+', '-', '*', '/'
def compute( op , x , y ):
if (op=='+'):
return x+y
elif op=='-':
return x-y
elif op=='*':
return x*y
elif op=='/':
return x/y
else:
return 0
compute('/',12,6)
# +
expr = '- * / 15 - 7 + 1 1 3 + 2 + 1 1'
l_expr=expr.split()
pila = []
for i in range(len(l_expr)-1,-1,-1):
print(l_expr[i])
if isAnOperator( l_expr[i] ):
pila.append( compute( l_expr[i] , pila.pop() , pila.pop() ) )
else:
pila.append( int(l_expr[i]) )
print(pila)
print(pila)
# +
def isAnOperator( x ):
return (x=='+') or (x=='-') or (x=='*') or (x=='/')
def compute( op , x , y ):
if (op=='+'):
return x+y
elif op=='-':
return x-y
elif op=='*':
return x*y
elif op=='/':
return x/y
else:
return 0
def evalPolishExpression( expr ):
e=expr.split()
pila = []
for i in range(len(e)-1,-1,-1):
if isAnOperator( e[i] ):
pila.append( compute( e[i] , pila.pop() , pila.pop() ) )
else:
pila.append( int(e[i]) )
return(pila.pop())
# -
evalPolishExpression( '- * / 15 - 7 + 1 1 3 + 2 + 1 1' )
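# The whole algorithm fits in a few lines with a dictionary of operators — a compact variant of `evalPolishExpression` with the same stack logic (note that `/` is Python 3 true division, so results come back as floats):

```python
def eval_polish(expr):
    ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
           '*': lambda a, b: a * b, '/': lambda a, b: a / b}
    stack = []
    for tok in reversed(expr.split()):
        if tok in ops:
            stack.append(ops[tok](stack.pop(), stack.pop()))  # first pop is the left operand
        else:
            stack.append(int(tok))
    return stack.pop()

print(eval_polish('- * / 15 - 7 + 1 1 3 + 2 + 1 1'))  # 5.0
```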
| Lez11/EsercizioNotazionePolacca.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/syamkakarla98/Beginners_Guide_to_PySpark/blob/main/Beginners_Guide_to%C2%A0PySpark.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="Zb8Wv_7_nGni"
# # <font color='tomato'>Beginners Guide to PySpark</font>
# + [markdown] id="0oETwpjbnP8T"
# ## Installation
#
# To run spark in Colab, we need to first install all the dependencies in Colab environment i.e.
# * **PySpark 3.0.1**
# * **py4j 0.10.9**
#
# Follow the steps to install the dependencies:
# + id="6iu_Mhl6m-U1" outputId="d6e15475-e290-4f37-fbb4-b541f807cacb" colab={"base_uri": "https://localhost:8080/", "height": 235}
# !pip install pyspark==3.0.1 py4j==0.10.9
# + [markdown] id="_Y77qU2uqC-B"
# Check your installation by creating a Spark session.
# + id="nZXXfyQ_oFfx"
from pyspark.sql import SparkSession
spark = SparkSession.builder\
.master("local[*]")\
.appName('PySpark_Tutorial')\
.getOrCreate()
# + [markdown] id="yqDy5sW3qRfJ"
# ## Reading Data
# + [markdown] id="M8Y4xl9ktpqi"
# ### Download the Kaggle US Stock Prices dataset
#
# Use the Kaggle API token (kaggle.json) to download the US Stock Prices dataset
# + id="LEQ2dFuUtuGv" outputId="b05196a5-d71c-415f-b573-7dca406b316d" colab={"resources": {"http://localhost:8080/nbextensions/google.colab/files.js": {"data": "<KEY>", "ok": true, "headers": [["content-type", "application/javascript"]], "status": 200, "status_text": ""}}, "base_uri": "https://localhost:8080/", "height": 73}
from google.colab import files
## Upload your kaggle json file (API Token)
files.upload()
# !mkdir ~/.kaggle
# !cp kaggle.json ~/.kaggle/
# !chmod 600 ~/.kaggle/kaggle.json
# + id="vQ1dWtvtp8ve" outputId="55379eed-87a5-4d89-8114-acc0f22e4441" colab={"base_uri": "https://localhost:8080/", "height": 67}
# !kaggle datasets download -d dinnymathew/usstockprices
# + id="ptCSjxkrs9Ol" outputId="94855533-9099-4c2e-c74e-d4c4f7a6294c" colab={"base_uri": "https://localhost:8080/", "height": 34}
# !ls
# + id="4HDyH7xuuWoO" outputId="a246769d-cfcc-46f8-cfc3-3b0b495fbae2" colab={"base_uri": "https://localhost:8080/", "height": 50}
# !mkdir data
# !unzip usstockprices -d data
# + id="SKkKznovum2U" outputId="c48715c6-75a5-47fe-b576-9a070db84843" colab={"base_uri": "https://localhost:8080/", "height": 50}
# !ls -l data/
# + [markdown] id="QZMQsAItvF9I"
# ## Import Modules
# + id="z5elOd3_urwh"
from pyspark.sql import functions as f
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
# %matplotlib inline
# + [markdown] id="6nlaWsM6veh_"
# ## Read Data
# + id="J7AR4YTjeR-6" outputId="a5dab4f8-606a-4ab4-fbf0-fb88c846aaa9" colab={"base_uri": "https://localhost:8080/", "height": 269}
# Before changing schema
b_data = spark.read.csv(
'data/stocks_price_final.csv',
sep = ',',
header = True,
)
b_data.printSchema()
# + id="dGT7jy3DDuLz"
from pyspark.sql.types import *
data_schema = [
StructField('_c0', IntegerType(), True),
StructField('symbol', StringType(), True),
StructField('data', DateType(), True),
StructField('open', DoubleType(), True),
StructField('high', DoubleType(), True),
StructField('low', DoubleType(), True),
StructField('close', DoubleType(), True),
StructField('volume', IntegerType(), True),
StructField('adjusted', DoubleType(), True),
StructField('market.cap', StringType(), True),
StructField('sector', StringType(), True),
StructField('industry', StringType(), True),
StructField('exchange', StringType(), True),
]
final_struc = StructType(fields=data_schema)
# + id="WAThgJoIvcKI"
data = spark.read.csv(
'data/stocks_price_final.csv',
sep = ',',
header = True,
schema = final_struc
)
# + id="NCFGHCsUyamR" outputId="d74d286c-fe98-40b3-8576-64a71630c0c9" colab={"base_uri": "https://localhost:8080/", "height": 269}
data.printSchema()
# + id="bUgocSgd_gqy" outputId="0ef2e836-f2f2-41c2-e671-0ae8c87c9fd2" colab={"base_uri": "https://localhost:8080/", "height": 202}
data.show(5)
# + id="us8Ks-FEHTYe"
data = data.withColumnRenamed('market.cap', 'market_cap')
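# The rename matters because the dot in `market.cap` collides with Spark's struct-field accessor syntax, so the column would otherwise need backtick quoting. For comparison, the same rename in pandas (toy data, not the stock dataset):

```python
import pandas as pd

df = pd.DataFrame({'market.cap': ['$1.2B'], 'sector': ['Technology']})
df = df.rename(columns={'market.cap': 'market_cap'})  # dotted names are awkward in both libraries
print(df.columns.tolist())  # ['market_cap', 'sector']
```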
# + [markdown] id="a0aFk3S-mN50"
# ## Inspect the data
# + id="heGhvrD_mSZZ" outputId="5b629e83-b8e2-49a1-b7d2-dfc3f8c568d7" colab={"base_uri": "https://localhost:8080/", "height": 54}
# prints the schema of the data
data.schema
# + id="11e0YeMXmhUs" outputId="87939907-5797-4994-9b29-897f885ce848" colab={"base_uri": "https://localhost:8080/", "height": 235}
data.dtypes
# + id="94m1hXVPprqU" outputId="630d15a5-8b20-4b22-ca96-64542fdd0c99" colab={"base_uri": "https://localhost:8080/", "height": 87}
data.head(3)
# + id="IYZ46WYNHc01" outputId="6d4fa243-b3b4-4b5d-9f45-8d30accf1d01" colab={"base_uri": "https://localhost:8080/", "height": 202}
data.show(5)
# + id="W9AJoO7JqxWU" outputId="f1c1d7fe-aa74-4c5f-bdf9-72a04ae009b7" colab={"base_uri": "https://localhost:8080/", "height": 54}
data.first()
# + id="ea-TUnvamQvg" outputId="106304e3-98fa-4b96-8abe-015c94f2d6c0" colab={"base_uri": "https://localhost:8080/", "height": 205}
data.describe().show()
# + id="WEfAsBkRtn9W" outputId="2cb67156-45c4-4a84-8cae-f46e064d1b77" colab={"base_uri": "https://localhost:8080/", "height": 235}
data.columns
# + id="IMZZZMo_tppK" outputId="ac3c1bfa-71a9-4e98-94ff-31666476b086" colab={"base_uri": "https://localhost:8080/", "height": 34}
data.count()
# + id="JVkU70BotpxY" outputId="7e9ccad7-c103-450a-ddb3-ad6dda8bb0c0" colab={"base_uri": "https://localhost:8080/", "height": 34}
data.distinct().count()
# + id="RBq6GXzSuZIe" outputId="c726aa15-3c22-465c-a43b-8d72c6daeec3" colab={"base_uri": "https://localhost:8080/", "height": 269}
data.printSchema()
# + [markdown] id="Ww3XqquewkvB"
# ## Column Operations/Manipulations
# + id="b7NNmtMYwka1" outputId="3ed5dac0-8ebe-4b95-88bb-114a1ce212be" colab={"base_uri": "https://localhost:8080/", "height": 202}
data = data.withColumn('date', data.data)
data.show(5)
# + id="gIpRmPSNxILh" outputId="665da690-8d48-4609-894c-d2220bc50605" colab={"base_uri": "https://localhost:8080/", "height": 202}
data = data.withColumnRenamed('date', 'data_changed')
data.show(5)
# + id="ZfMRq0AwxH3R" outputId="d70a40df-dc81-4568-ed80-e82511ec9423" colab={"base_uri": "https://localhost:8080/", "height": 202}
data = data.drop('data_changed')
data.show(5)
# + id="tEGNIZJUvsbQ" outputId="26c70865-f0bb-4725-e4a8-9227917d8917" colab={"base_uri": "https://localhost:8080/", "height": 185}
data.select(['open', 'high', 'low', 'close', 'volume', 'adjusted']).describe().show()
# + id="HCnsfWXP3_6J" outputId="456b0ff7-9891-4fe3-a1f2-fac60068e8e0" colab={"base_uri": "https://localhost:8080/", "height": 302}
data.groupBy('sector').count().show()
# + id="CkyeKRraxIV3"
sec_x = data.select(['sector', 'open', 'close', 'adjusted']).groupBy('sector').mean().collect()
# + [markdown] id="UeASC7qHK9nh"
# Convert each row of the data into a **list**
# + id="ujw8raEv2sJv" outputId="3c53cbd7-3f4c-421a-b996-da288e586e10" colab={"base_uri": "https://localhost:8080/", "height": 218}
for row in sec_x:
print(list(row), end='\n')
# + [markdown] id="_9auYKi9LE3V"
# Convert each row of the data into a **dictionary**
# + id="aK9mZF24JpM9" outputId="c8677a64-3e13-4e71-f432-fd359b643eb7" colab={"base_uri": "https://localhost:8080/", "height": 218}
for row in sec_x:
print(row.asDict(), end='\n')
# + [markdown] id="BKU8_RHHLdSh"
# Convert the data into a pandas **dataframe**
# + id="YsCABx96Ljt7"
sec_df = data.select(['sector', 'open', 'close', 'adjusted']).groupBy('sector').mean().toPandas()
# + id="NGFoG1DLLpyi" outputId="a6277432-bff2-42ba-ac2a-dd352e58e954" colab={"base_uri": "https://localhost:8080/", "height": 402}
sec_df
# + id="gru1FywLL3Ih" outputId="52c1ac20-af6d-4357-a179-9d41ea85acc2" colab={"base_uri": "https://localhost:8080/", "height": 517}
sec_df.plot(kind = 'bar', x='sector', y = sec_df.columns.tolist()[1:], figsize=(12, 6))
# + [markdown] id="BA_lWkHsNKGc"
# Remove **basic industries** from the plot and view it again...
# + id="dT6IoKDTML4l" outputId="4fbe465e-33e1-4400-fb31-7354f8d98f83" colab={"base_uri": "https://localhost:8080/", "height": 500}
ind = list(range(12))
ind.pop(6)
sec_df.iloc[ind ,:].plot(kind = 'bar', x='sector', y = sec_df.columns.tolist()[1:], figsize=(12, 6), ylabel = 'Stock Price', xlabel = 'Sector')
plt.show()
# + [markdown] id="TEwN2D1lKbi7"
#
# + id="lXbvY4Ry6M7O" outputId="9b81227e-7b05-4d2e-e424-731b7ff017e9" colab={"base_uri": "https://localhost:8080/", "height": 195}
industries_x = data.select(['industry', 'open', 'close', 'adjusted']).groupBy('industry').mean().toPandas()
industries_x.head()
# + id="oOc73jyIFEFl" outputId="9912cab7-8acb-464f-937f-22af810d1440" colab={"base_uri": "https://localhost:8080/", "height": 1000}
industries_x.plot(kind = 'barh', x='industry', y = industries_x.columns.tolist()[1:], figsize=(10, 50))
# + [markdown] id="NV_3Pz2yaYLf"
# Remove **major chemicals** and **building products** to view the rest of the data clearly
# + id="mYKkpXL_FRa3" outputId="06885893-165a-4866-f5ca-28bc7009b93a" colab={"base_uri": "https://localhost:8080/", "height": 1000}
q = industries_x[(industries_x.industry != 'Major Chemicals') & (industries_x.industry != 'Building Products')]
q.plot(kind = 'barh', x='industry', y = q.columns.tolist()[1:], figsize=(10, 50), xlabel='Stock Price', ylabel = 'Industry')
plt.show()
# + id="HgEJ2d2CHo1-" outputId="f9dc199e-cf13-40bb-d1c4-3cf151d2a1d4" colab={"base_uri": "https://localhost:8080/", "height": 454}
import pyspark.sql.functions as f
health = data.filter(f.col('sector') == 'Health Care')
health.show()
# + [markdown] id="EfxSK3OSbifQ"
# ### How to use Aggregation
# + id="hYcg9S7eH1DG" outputId="02a8da30-bb32-45fa-d952-8956670168c8" colab={"base_uri": "https://localhost:8080/", "height": 322}
from pyspark.sql.functions import col, min, max, avg, lit
data.groupBy("sector") \
.agg(min("data").alias("From"),
max("data").alias("To"),
min("open").alias("Minimum Opening"),
max("open").alias("Maximum Opening"),
avg("open").alias("Average Opening"),
min("close").alias("Minimum Closing"),
max("close").alias("Maximum Closing"),
avg("close").alias("Average Closing"),
min("adjusted").alias("Minimum Adjusted Closing"),
max("adjusted").alias("Maximum Adjusted Closing"),
avg("adjusted").alias("Average Adjusted Closing"),
).show(truncate=False)
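The `groupBy().agg()` call above computes per-sector minima, maxima, and averages in a single pass. The same per-group aggregation can be sketched in plain Python to show what Spark is doing; the sector names and prices below are made-up illustration data, not rows from the dataset, and only the `open` column is mirrored:

```python
from collections import defaultdict

# toy rows: (sector, open) -- illustration data only
rows = [
    ("Technology", 100.0),
    ("Technology", 300.0),
    ("Health Care", 50.0),
    ("Health Care", 70.0),
]

# group values by sector, like groupBy("sector")
groups = defaultdict(list)
for sector, open_price in rows:
    groups[sector].append(open_price)

# per-group min / max / avg, mirroring min("open"), max("open"), avg("open")
agg = {
    sector: {
        "Minimum Opening": min(prices),
        "Maximum Opening": max(prices),
        "Average Opening": sum(prices) / len(prices),
    }
    for sector, prices in groups.items()
}
print(agg["Technology"])  # {'Minimum Opening': 100.0, 'Maximum Opening': 300.0, 'Average Opening': 200.0}
```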
# + [markdown] id="wnEqe6C5ivLN"
# Get the min, max, and average values per sector from **Jan 2019** to **Jan 2020**
# + id="texK2jlleAgi" outputId="794cc28a-d427-45a4-f1b7-2790cbcf86d2" colab={"base_uri": "https://localhost:8080/", "height": 322}
data.filter( (col('data') >= lit('2019-01-02')) & (col('data') <= lit('2020-01-31')) )\
.groupBy("sector") \
.agg(min("data").alias("From"),
max("data").alias("To"),
min("open").alias("Minimum Opening"),
max("open").alias("Maximum Opening"),
avg("open").alias("Average Opening"),
min("close").alias("Minimum Closing"),
max("close").alias("Maximum Closing"),
avg("close").alias("Average Closing"),
min("adjusted").alias("Minimum Adjusted Closing"),
max("adjusted").alias("Maximum Adjusted Closing"),
avg("adjusted").alias("Average Adjusted Closing"),
).show(truncate=False)
# + [markdown] id="b9DRUiPLk2IT"
# Plot the time-series data of the **technology** sector stock trades
# + id="61klpU8Lh0YO" outputId="e3c7fcd8-1437-4138-87b3-0c6bce9d13c6" colab={"base_uri": "https://localhost:8080/", "height": 454}
tech = data.where(col('sector') == 'Technology').select('data', 'open', 'close', 'adjusted')
tech.show()
# + id="Wj5p8ixQkjuT" outputId="3368021d-add1-4735-c996-b9626cc61d00" colab={"base_uri": "https://localhost:8080/", "height": 713}
fig, axes = plt.subplots(nrows=3, ncols=1, figsize =(60, 30))
tech.toPandas().plot(kind = 'line', x = 'data', y='open', xlabel = 'Date Range', ylabel = 'Stock Opening Price', ax = axes[0], color = 'mediumspringgreen')
tech.toPandas().plot(kind = 'line', x = 'data', y='close', xlabel = 'Date Range', ylabel = 'Stock Closing Price', ax = axes[1], color = 'tomato')
tech.toPandas().plot(kind = 'line', x = 'data', y='adjusted', xlabel = 'Date Range', ylabel = 'Stock Adjusted Price', ax = axes[2], color = 'orange')
plt.show()
# + id="GRpy_6Fn29rJ" outputId="4870da3e-6c78-4dc3-aa12-1ce4cb1c543f" colab={"base_uri": "https://localhost:8080/", "height": 202}
data.select('sector').show(5)
# + id="FTlxRGkw6jCu" outputId="43028000-9185-4b58-ed0d-30f9e5bd91dd" colab={"base_uri": "https://localhost:8080/", "height": 202}
data.select(['open', 'close', 'adjusted']).show(5)
# + id="EsZfCrIZ6qkd" outputId="f1a63443-ed6c-432c-b220-81283a574ee5" colab={"base_uri": "https://localhost:8080/", "height": 202}
data.filter(data.adjusted.between(100.0, 500.0)).show(5)
# + id="IYWm7iDy8dHC" outputId="df39d2c1-b00c-4b47-af0a-4afb44b1382d" colab={"base_uri": "https://localhost:8080/", "height": 202}
from pyspark.sql.functions import col, lit
data.filter( (col('data') >= lit('2020-01-01')) & (col('data') <= lit('2020-01-31')) ).show(5)
# + id="i5LKNfTGAEl0" outputId="1c9066c5-e8ab-4447-e236-0582bcad6eb1" colab={"base_uri": "https://localhost:8080/", "height": 202}
data.select('open', 'close', f.when(data.adjusted >= 200.0, 1).otherwise(0)).show(5)
# + id="exczz-BFBd9N" outputId="43641b12-f454-4d4c-b64f-f945117faf01" colab={"base_uri": "https://localhost:8080/", "height": 302}
data.select('sector',
data.sector.rlike('^[BC]').alias('Sector Starting with B or C')
).distinct().show()
# + id="3MBAUrelDvBO" outputId="1e0e48c0-fdb1-4f57-de76-8db6350a17eb" colab={"base_uri": "https://localhost:8080/", "height": 454}
data.select(['industry', 'open', 'close', 'adjusted']).groupBy('industry').mean().show()
# + id="zkSdYUdRGkkk"
| Beginners_Guide_to PySpark.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# URL: http://matplotlib.org/examples/mplot3d/trisurf3d_demo.html
#
# Most examples work across multiple plotting backends, this example is also available for:
#
# * [Plotly - Triangular 3D surfaces](../plotly/trisurf3d_demo.ipynb)
# ## HoloViews
import numpy as np
import holoviews as hv
hv.extension('matplotlib')
# %output backend='matplotlib' size=200
# ### Define data
# +
n_radii = 8
n_angles = 36
# Make radii and angles spaces (radius r=0 omitted to eliminate duplication).
radii = np.linspace(0.125, 1.0, n_radii)
angles = np.linspace(0, 2*np.pi, n_angles, endpoint=False)
# Repeat all angles for each radius.
angles = np.repeat(angles[..., np.newaxis], n_radii, axis=1)
# Convert polar (radii, angles) coords to cartesian (x, y) coords.
# (0, 0) is manually added at this stage, so there will be no duplicate
# points in the (x, y) plane.
x = np.append(0, (radii*np.cos(angles)).flatten())
y = np.append(0, (radii*np.sin(angles)).flatten())
# Compute z to make the pringle surface.
z = np.sin(-x*y)
trisurface = hv.TriSurface((x, y, z))
# -
# ## Plot
trisurface
| examples/gallery/demos/matplotlib/trisurf3d_demo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Overview
# - Improvements over nb022
# - Optimize the focal loss
# - Use the focal loss from nb019
# - Exclude the top-8 drugs
# git hash
import subprocess
cmd = "git rev-parse --short HEAD"
hash = subprocess.check_output(cmd.split()).strip().decode('utf-8')
print(hash)
# # Const
# +
# basic
NB = '023'
DEBUG = False
isPI = False
isShowLog = False
PATH_TRAIN = '../data_ignore/input/train_features.csv'
PATH_TRAIN_SCORED = '../data_ignore/input/train_targets_scored.csv'
PATH_TRAIN_NONSCORED = '../data_ignore/input/train_targets_nonscored.csv'
PATH_SUB = '../data_ignore/input/sample_submission.csv'
PATH_TEST = '../data_ignore/input/test_features.csv'
SAVE_DIR = f'../data_ignore/output_nb/nb{NB}/'
PATH_DRUGID = '../data_ignore/input/train_drug.csv'
PATH_GROUP696 = './../data_ignore/output_nb/nb004/group.csv'
PATH_ESTIMATED_LOGLOSS = './../data_ignore/output_nb/nb017/estimated_logloss.csv'
TOP8_DRUG = ['87d714366', '9f80f3f77', '8b87a7a83', '5628cb3ee', 'd08af5d4b', '292ab2c28', 'd50f18348', 'd1b47f29d']
# -
settings_str = """
globals:
seed: 2020
device: cuda
num_epochs: 80
dataset:
name:
params:
split:
name: MultiStratifiedKFold
params:
n_splits: 5
random_state: 42
shuffle: True
loader:
train:
batch_size: 512
shuffle: True
num_workers: 10
pin_memory: True
drop_last: True
val:
batch_size: 512
shuffle: False
num_workers: 10
pin_memory: True
drop_last: False
model:
name:
params:
loss:
name: SmoothLogitsLoss
params: {}
optimizer:
name: Adam
params:
lr: 0.005
scheduler:
name: CosineAnnealingLR
params:
T_max: 10
"""
# # Import everything I need :)
# +
import os
import time
import yaml
import random
import numpy as np
import pandas as pd
from glob import glob
from pdb import set_trace as st
from fastprogress import progress_bar
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import log_loss
from sklearn.model_selection import KFold
from iterstrat.ml_stratifiers import MultilabelStratifiedKFold
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.autograd import Variable
from torch.nn.modules.loss import _WeightedLoss
from torch.utils.data import Dataset, DataLoader
# -
import warnings
warnings.filterwarnings('ignore')
# # My func
# +
def preprocess(df_):
df = df_.copy()
df.loc[:, 'cp_type'] = df.loc[:, 'cp_type'].map({'trt_cp': 0, 'ctl_vehicle': 1})
df.loc[:, 'cp_dose'] = df.loc[:, 'cp_dose'].map({'D1': 0, 'D2': 1})
# df.loc[:, 'cp_time'] = df.loc[:, 'cp_time'].map({24: 0, 48: 1, 72: 2})
del df['sig_id']
return df
def remove_ctl_cp(features_, target_):
features = features_.copy()
target = target_.copy()
# bools = features['cp_type'] != 'ctl_vehicle'
bools = features['cp_type'] != 1
features = features[bools].reset_index(drop=True)
features = features.drop(['cp_type'], axis=1).values
target = target[bools].reset_index(drop=True).values
return features, target
def add_ctl_cp_oof(oof):
oof_new = np.zeros_like(train_targets).astype(float)
bools = train_features['cp_type'] != 'ctl_vehicle'
oof_new[bools, :] = oof
return oof_new
def seed_everything(seed):
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False  # benchmark=True would break determinism
class permutation_importance():
def __init__(self, model, metric):
self.is_computed = False
self.n_feat = 0
self.base_score = 0
self.model = model
self.metric = metric
self.df_result = []
def compute(self, _X_valid, y_valid):
X_valid = pd.DataFrame(_X_valid, columns=FEAT_COLUMNS)
self.n_feat = len(X_valid.columns)
val_set = MoaDataset(_X_valid, y_valid, mode='train')
dataloaders = {'val': DataLoader(val_set, **settings['loader']['val'])}
y_valid_pred = get_epoch_pred(self.model, device, dataloaders['val'])
self.base_score = self.metric(y_valid, y_valid_pred)
self.df_result = pd.DataFrame({'feat': X_valid.columns,
'score': np.zeros(self.n_feat),
'score_diff': np.zeros(self.n_feat)})
# predict
for i, col in enumerate(progress_bar(X_valid.columns)):
df_perm = X_valid.copy()
np.random.seed(1)
df_perm[col] = np.random.permutation(df_perm[col])
# y_valid_pred = self.model.predict(df_perm)
val_set = MoaDataset(df_perm.values, y_valid, mode='train')
dataloaders = {'val': DataLoader(val_set, **settings['loader']['val'])}
y_valid_pred = get_epoch_pred(self.model, device, dataloaders['val'])
score = self.metric(y_valid, y_valid_pred)
self.df_result['score'][self.df_result['feat']==col] = score
self.df_result['score_diff'][self.df_result['feat']==col] = self.base_score - score
self.is_computed = True
def get_negative_feature(self):
assert self.is_computed, 'the compute method has not been run yet'
idx = self.df_result['score_diff'] < 0
return self.df_result.loc[idx, 'feat'].values.tolist()
def get_positive_feature(self):
assert self.is_computed, 'the compute method has not been run yet'
idx = self.df_result['score_diff'] > 0
return self.df_result.loc[idx, 'feat'].values.tolist()
def show_permutation_importance(self, score_type='loss'):
'''score_type = 'loss' or 'accuracy' '''
assert self.is_computed, 'the compute method has not been run yet'
if score_type=='loss':
ascending = True
elif score_type=='accuracy':
ascending = False
else:
raise ValueError("score_type must be 'loss' or 'accuracy'")
plt.figure(figsize=(15, int(0.25*self.n_feat)))
sns.barplot(x="score_diff", y="feat", data=self.df_result.sort_values(by="score_diff", ascending=ascending))
plt.title('base_score - permutation_score')
def get_not_drug_leak_folds(n_splits, train_features, train_drug, group696):
'''
Create n_splits folds.
cp_type = ctl_vehicle rows and the top-8 drugs are assigned fold = -1.
696-group csv: https://www.kaggle.com/fkubota/moa-nb004-696group
::example::
train_features = pd.read_csv("train_features.csv")
train_drug = pd.read_csv("train_drug.csv")
group696 = pd.read_csv("MoA_nb004_696group/group.csv")
df_fold = get_not_drug_leak_folds(5, train_features, train_drug, group696)
'''
TOP8_DRUG = ['87d714366', '9f80f3f77', '8b87a7a83', '5628cb3ee', 'd08af5d4b', '292ab2c28', 'd50f18348', 'd1b47f29d']
mask_trt = (train_features['cp_type'] == 'trt_cp').values
# create mask_top8
mask_top8 = []
for drug_id in train_drug.drug_id.values:
if drug_id in TOP8_DRUG:
mask_top8.append(True)
else:
mask_top8.append(False)
mask_top8 = np.array(mask_top8)
# extract rows that are trt_cp and not in the top-8 drugs
# group = 0 has the most members, so it is handled last
drug_groups = group696[mask_trt & ~mask_top8].group.values
groups = np.sort(group696[mask_trt & ~mask_top8].group.unique())
groups = groups[1:]
groups = np.append(groups, 0)
# assign a fold to each group
tile = []
train_drug_trt = train_drug[mask_trt & ~mask_top8]
train_drug_trt['fold'] = -1
for i_grp, grp in enumerate(groups):
if i_grp == 0:
tile = np.arange(1, n_splits+1).astype(int)
mask_grp = drug_groups == grp
drug_rank = train_drug[mask_trt & ~mask_top8][mask_grp].drug_id.value_counts()
n_repeat = np.ceil(len(drug_rank)/n_splits).astype(int)
folds = np.tile(tile, n_repeat)[:len(drug_rank)]
for i, drug_id in enumerate(drug_rank.index.sort_values()):
mask = train_drug_trt.drug_id.values == drug_id
train_drug_trt.fold[mask] = folds[i]
tile = train_drug_trt.fold.value_counts()[::-1][:n_splits].index
train_drug_fold = train_drug.copy()
train_drug_fold['fold'] = -1
train_drug_fold['fold'][mask_trt & ~mask_top8] = train_drug_trt.fold.values
return train_drug_fold
# -
class MoaModel(nn.Module):
def __init__(self, n_input, n_output):
super(MoaModel, self).__init__()
self.batch_norm1 = nn.BatchNorm1d(n_input)
self.dropout1 = nn.Dropout(0.2)
self.dense1 = nn.utils.weight_norm(nn.Linear(n_input, 2048))
self.batch_norm2 = nn.BatchNorm1d(2048)
self.dropout2 = nn.Dropout(0.5)
self.dense2 = nn.utils.weight_norm(nn.Linear(2048, 1048))
self.batch_norm3 = nn.BatchNorm1d(1048)
self.dropout3 = nn.Dropout(0.5)
# self.dense3 = nn.utils.weight_norm(nn.Linear(1048, 206))
self.dense3 = nn.utils.weight_norm(nn.Linear(1048, n_output))
def forward(self, x):
x = self.batch_norm1(x)
x = self.dropout1(x)
x = F.relu(self.dense1(x))
x = self.batch_norm2(x)
x = self.dropout2(x)
x = F.relu(self.dense2(x))
x = self.batch_norm3(x)
x = self.dropout3(x)
x_raw = self.dense3(x)
x_sigmoid = torch.sigmoid(x_raw)  # F.sigmoid is deprecated
return x_sigmoid, x_raw
class MoaDataset(Dataset):
def __init__(self, df, targets, mode):
self.mode = mode
self.df = df
# self.targets = targets
if mode=='train':
self.targets = targets
def __len__(self):
return len(self.df)
def __getitem__(self, idx):
if self.mode == 'train':
return torch.FloatTensor(self.df[idx]), torch.FloatTensor(self.targets[idx])
elif self.mode == 'val':
return torch.FloatTensor(self.df[idx]), 0
# +
def mean_log_loss(y_true, y_pred):
metrics = []
# for i in range(y_true.shape[1]):
# metrics.append(log_loss(y_true[:, i], y_pred[:, i].astype(float), labels=[0,1]))
# return np.mean(metrics)
y_true = y_true.astype(np.float64).ravel()
y_pred = y_pred.astype(np.float64).ravel()
return log_loss(y_true, y_pred, labels=[0, 1])
class SmoothBCEwLogits(_WeightedLoss):
def __init__(self, weight=None, reduction='mean', smoothing=0.001):
super().__init__(weight=weight, reduction=reduction)
self.smoothing = smoothing
self.weight = weight
self.reduction = reduction
@staticmethod
def _smooth(targets:torch.Tensor, n_labels:int, smoothing=0.0):
assert 0 <= smoothing < 1
with torch.no_grad():
targets = targets * (1.0 - smoothing) + 0.5 * smoothing
return targets
def forward(self, inputs, targets):
targets = SmoothBCEwLogits._smooth(targets, inputs.size(-1),
self.smoothing)
loss = F.binary_cross_entropy_with_logits(inputs, targets,self.weight)
if self.reduction == 'sum':
loss = loss.sum()
elif self.reduction == 'mean':
loss = loss.mean()
return loss
class FocalLoss2d(nn.modules.loss._WeightedLoss):
'''
https://github.com/andrijdavid/FocalLoss/blob/master/focalloss.py
'''
def __init__(self, gamma=2, weight=None, size_average=None, ignore_index=-100,
reduce=None, reduction='mean', balance_param=0.25):
super(FocalLoss2d, self).__init__(weight, size_average, reduce, reduction)
self.gamma = gamma
self.weight = weight
self.size_average = size_average
self.ignore_index = ignore_index
self.balance_param = balance_param
def forward(self, input, target):
# inputs and targets are assumed to be BatchxClasses
assert len(input.shape) == len(target.shape)
assert input.size(0) == target.size(0)
assert input.size(1) == target.size(1)
# compute the negative log-likelihood
# (Variable(self.weight) would crash when weight is None, and the result was unused)
logpt = - F.binary_cross_entropy_with_logits(input, target, pos_weight=self.weight, reduction=self.reduction)
pt = torch.exp(logpt)
# compute the loss
focal_loss = -( (1-pt)**self.gamma ) * logpt
balanced_focal_loss = self.balance_param * focal_loss
return balanced_focal_loss
# -
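A quick numeric sanity check of the focal-loss formula implemented above, as a standalone per-sample re-derivation rather than an import of `FocalLoss2d` (the class applies the same formula to a batch-reduced loss): with gamma = 0 the modulating factor `(1 - pt)**gamma` is 1, so the balanced focal loss reduces to `balance_param` times the plain binary cross-entropy.

```python
import math

def bce(p, y):
    # binary cross-entropy for a single predicted probability p and label y
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def focal(p, y, gamma, balance_param=0.25):
    # mirrors FocalLoss2d: logpt = -BCE, pt = exp(logpt),
    # loss = balance_param * -((1 - pt)**gamma) * logpt
    logpt = -bce(p, y)
    pt = math.exp(logpt)
    return balance_param * (-((1 - pt) ** gamma) * logpt)

p, y = 0.8, 1.0
# gamma = 0 reduces to balance_param * BCE
assert abs(focal(p, y, gamma=0) - 0.25 * bce(p, y)) < 1e-12
# with gamma > 0, a well-classified example is down-weighted
assert focal(p, y, gamma=2) < focal(p, y, gamma=0)
```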
class EarlyStopping:
"""
Early stops the training if validation loss doesn't improve after a given patience.
https://github.com/Bjarten/early-stopping-pytorch/blob/master/pytorchtools.py
"""
def __init__(self, patience=7, verbose=False, delta=0, path='checkpoint.pt', trace_func=print):
"""
Args:
patience (int): How long to wait after last time validation loss improved.
Default: 7
verbose (bool): If True, prints a message for each validation loss improvement.
Default: False
delta (float): Minimum change in the monitored quantity to qualify as an improvement.
Default: 0
path (str): Path for the checkpoint to be saved to.
Default: 'checkpoint.pt'
trace_func (function): trace print function.
Default: print
"""
self.patience = patience
self.verbose = verbose
self.counter = 0
self.best_score = None
self.early_stop = False
self.val_loss_min = np.Inf
self.delta = delta
self.path = path
self.trace_func = trace_func
# self.best_state_dict = {}
def __call__(self, val_loss, model):
score = -val_loss
if self.best_score is None:
self.best_score = score
self.save_checkpoint(val_loss, model)
elif score < self.best_score + self.delta:
self.counter += 1
if self.verbose:
self.trace_func(f'EarlyStopping counter: {self.counter} out of {self.patience}')
if self.counter >= self.patience:
self.early_stop = True
else:
self.best_score = score
self.save_checkpoint(val_loss, model)
self.counter = 0
def save_checkpoint(self, val_loss, model):
'''Saves model when validation loss decrease.'''
if self.verbose:
self.trace_func(f'Validation loss decreased ({self.val_loss_min:.6f} --> {val_loss:.6f}). Saving model ...')
# if not DEBUG:
torch.save(model.state_dict(), self.path)
# self.best_state_dict = model.state_dict()
self.val_loss_min = val_loss
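The patience logic of `EarlyStopping` can be illustrated with a stripped-down version that tracks only the counter, with no model or checkpointing involved; this mirrors the class above under its default `delta=0` (equal loss counts as no improvement):

```python
def early_stop_epoch(val_losses, patience):
    """Return the 1-based epoch at which training stops, or None if it never does."""
    best = float("inf")
    counter = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:          # improvement: remember it and reset the counter
            best = loss
            counter = 0
        else:                    # no improvement: count toward patience
            counter += 1
            if counter >= patience:
                return epoch
    return None

# loss improves twice, then plateaus -> stops after `patience` bad epochs
assert early_stop_epoch([1.0, 0.9, 0.95, 0.96, 0.97], patience=2) == 4
# loss keeps improving -> never stops
assert early_stop_epoch([1.0, 0.9, 0.8, 0.7], patience=2) is None
```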
# +
def train_model(model, device, train_loader, optimizer, scheduler, criterion):
model.train()
running_loss = 0.0
for i, (x, y) in enumerate(train_loader):
x, y = x.to(device), y.to(device)
optimizer.zero_grad()
with torch.set_grad_enabled(True):
pred_sigmoid, pred_raw = model(x)
loss = criterion(pred_raw, y)
loss.backward()
optimizer.step()
running_loss += loss.item() / len(train_loader)
scheduler.step()
return running_loss
def get_epoch_loss_score(model, device, valid_loader, criterion, optimizer):
model.eval()
running_loss = 0.0
targets = []
preds = []
for i, (x, y) in enumerate(valid_loader):
x, y = x.to(device), y.to(device)
optimizer.zero_grad()
with torch.set_grad_enabled(False):
pred_sigmoid, pred_raw = model(x)
loss = criterion(pred_raw, y)
running_loss += loss.item() / len(valid_loader)
targets.append(y)
preds.append(pred_sigmoid)
targets = torch.cat(targets, dim=0).cpu().numpy()
preds = torch.cat(preds, dim=0).cpu().numpy()
_mean_log_loss = mean_log_loss(targets, preds)
return running_loss, _mean_log_loss, preds
def get_epoch_pred(model, device, valid_loader):
model.eval()
targets = []
preds = []
for i, (x, y) in enumerate(valid_loader):
x, y = x.to(device), y.to(device)
with torch.set_grad_enabled(False):
pred_sigmoid, pred_raw = model(x)
targets.append(y)
preds.append(pred_sigmoid)
targets = torch.cat(targets, dim=0).cpu().numpy()
preds = torch.cat(preds, dim=0).cpu().numpy()
return preds
# +
def run_fold(dataloaders, shape, checkpoint_path, ModelClass, show_log=True):
device = torch.device("cuda")
model = ModelClass(shape[0], shape[1]).to(device)
# model = ModelClass(train.shape[1], ).to(device)
early_stopping = EarlyStopping(patience=15, verbose=show_log, path=checkpoint_path)
optimizer = optim.__getattribute__(settings['optimizer']['name'])(
model.parameters(), **settings['optimizer']['params'])
scheduler = optim.lr_scheduler.__getattribute__(settings['scheduler']['name'])(
optimizer, **settings['scheduler']['params'])
best_valid_loss = np.inf
best_mean_log_loss = np.inf
best_preds = 0
val_losses = []
trn_losses = []
for epoch in range(n_epochs):
train_loss = train_model(model, device, dataloaders['train'], optimizer, scheduler, criterion)
valid_loss, _mean_log_loss, preds = get_epoch_loss_score(model, device, dataloaders['val'], criterion, optimizer)
trn_losses.append(train_loss)
val_losses.append(valid_loss)
if show_log:
print(f"Epoch {str(epoch+1).zfill(2)}/{n_epochs } loss: {train_loss:5.5f} val_loss: {valid_loss:5.5f} mean_log_loss: {_mean_log_loss:5.5f}")
early_stopping(valid_loss, model)
if early_stopping.early_stop:
print("Early stopping")
break
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
best_mean_log_loss = _mean_log_loss
best_preds = preds
return best_mean_log_loss, best_preds, trn_losses, val_losses
def run(splitter, train, targets, ModelClass, show_log=True, pi=False):
mean_log_loss_list = []
oof = np.zeros_like(targets).astype(float)
df_pi = pd.DataFrame(columns=['feat', 'score_diff'])
for n, (idx_trn, idx_val) in enumerate(splitter.split(train, targets)):
print('-'*100)
print(f':: start fold {n+1}/{n_splits} at {time.ctime()} ::')
print('-'*100)
X_trn, X_val = train[idx_trn], train[idx_val]
y_trn, y_val = targets[idx_trn], targets[idx_val]
train_set = MoaDataset(X_trn, y_trn, mode='train')
val_set = MoaDataset(X_val, y_val, mode='train')
dataloaders = {
'train': DataLoader(train_set, **settings['loader']['train']),
'val': DataLoader(val_set, **settings['loader']['val']),
}
checkpoint_path = f'{SAVE_DIR}Fold{n+1}of{n_splits}.pt'
shape = (X_trn.shape[1], y_trn.shape[1])
best_mean_log_loss, best_preds, trn_losses, val_losses = run_fold(dataloaders, shape, checkpoint_path, ModelClass, show_log=show_log)
# result
print(f':: best mean_log_loss: {best_mean_log_loss:5.5f} ::')
mean_log_loss_list.append(best_mean_log_loss)
oof[idx_val, :] = best_preds
# permutation importance
if pi:
device = torch.device("cuda")
model = ModelClass(shape[0], shape[1]).to(device)
state_dict = torch.load(checkpoint_path)
model.load_state_dict(state_dict)
model.to(device)
model.eval()
pi = permutation_importance(model, mean_log_loss)  # pass the model and the metric
pi.compute(X_val, y_val)
pi_result = pi.df_result
df_pi = pd.concat([df_pi, pi_result[['feat', 'score_diff']]])
# pi.show_permutation_importance(score_type='loss')
# plot
if show_log:
x = np.arange(1, len(trn_losses)+1)
plt.figure(figsize=(12, 7))
plt.plot(x[1:], trn_losses[1:], '--.', label='train')
plt.plot(x[1:], val_losses[1:], '--.', label='valid')
plt.title(f"fold{n+1}/{n_splits} {settings['loss']['name']}")
plt.legend()
plt.show()
print('\n')
if pi:
# permutation score
plt.figure(figsize=(15, int(0.25*len(FEAT_COLUMNS))))
order = df_pi.groupby(["feat"]).mean()['score_diff'].reset_index().sort_values('score_diff', ascending=True)
sns.barplot(x="score_diff", y="feat", data=df_pi, order=order['feat'])
plt.title('base_score - permutation_score')
plt.show()
return mean_log_loss_list, oof, df_pi
def run_not_drug_leak(df_fold, train, targets, ModelClass, show_log=True, pi=False):
mean_log_loss_list = []
oof = np.zeros_like(targets).astype(float)
df_pi = pd.DataFrame(columns=['feat', 'score_diff'])
# for n, (idx_trn, idx_val) in enumerate(splitter.split(train, targets)):
for n, fold_i in enumerate(df_fold['fold'].unique()):
print('-'*100)
print(f':: start fold {n+1}/{n_splits} at {time.ctime()} ::')
print('-'*100)
mask_fold = df_fold.fold == fold_i
X_trn, X_val = train[~mask_fold], train[mask_fold]
y_trn, y_val = targets[~mask_fold], targets[mask_fold]
train_set = MoaDataset(X_trn, y_trn, mode='train')
val_set = MoaDataset(X_val, y_val, mode='train')
dataloaders = {
'train': DataLoader(train_set, **settings['loader']['train']),
'val': DataLoader(val_set, **settings['loader']['val']),
}
checkpoint_path = f'{SAVE_DIR}Fold{n+1}of{n_splits}.pt'
shape = (X_trn.shape[1], y_trn.shape[1])
best_mean_log_loss, best_preds, trn_losses, val_losses = run_fold(dataloaders, shape, checkpoint_path, ModelClass, show_log=show_log)
# result
print(f':: best mean_log_loss: {best_mean_log_loss:5.5f} ::')
mean_log_loss_list.append(best_mean_log_loss)
# oof[idx_val, :] = best_preds
oof[mask_fold, :] = best_preds
# permutation importance
if pi:
device = torch.device("cuda")
model = ModelClass(shape[0], shape[1]).to(device)
state_dict = torch.load(checkpoint_path)
model.load_state_dict(state_dict)
model.to(device)
model.eval()
pi = permutation_importance(model, mean_log_loss)  # pass the model and the metric
pi.compute(X_val, y_val)
pi_result = pi.df_result
df_pi = pd.concat([df_pi, pi_result[['feat', 'score_diff']]])
# pi.show_permutation_importance(score_type='loss')
# plot
if show_log:
x = np.arange(1, len(trn_losses)+1)
plt.figure(figsize=(12, 7))
plt.plot(x[1:], trn_losses[1:], '--.', label='train')
plt.plot(x[1:], val_losses[1:], '--.', label='valid')
plt.title(f"fold{n+1}/{n_splits} {settings['loss']['name']}")
plt.legend()
plt.show()
print('\n')
if pi:
# permutation score
plt.figure(figsize=(15, int(0.25*len(FEAT_COLUMNS))))
order = df_pi.groupby(["feat"]).mean()['score_diff'].reset_index().sort_values('score_diff', ascending=True)
sns.barplot(x="score_diff", y="feat", data=df_pi, order=order['feat'])
plt.title('base_score - permutation_score')
plt.show()
return mean_log_loss_list, oof, df_pi
# -
# # Preparation
# set
# +
settings = yaml.safe_load(settings_str)
seed_everything(settings['globals']['seed'])
sns.set()
sns.set_context('talk')
if not os.path.exists(SAVE_DIR):
os.makedirs(SAVE_DIR)
# -
if DEBUG:
settings['split']['params']['n_splits'] = 2
settings['globals']['num_epochs'] = 3
# <br>
#
# load dataset
# +
train_features = pd.read_csv(PATH_TRAIN)
train_targets = pd.read_csv(PATH_TRAIN_SCORED)
# test_features = pd.read_csv(PATH_TEST)
train_drug = pd.read_csv(PATH_DRUGID)
group696 = pd.read_csv(PATH_GROUP696)
# ss = pd.read_csv(PATH_SUB)
# -
# create mask_top8
mask_top8 = []
for drug_id in train_drug.drug_id.values:
if drug_id in TOP8_DRUG:
mask_top8.append(True)
else:
mask_top8.append(False)
mask_top8 = np.array(mask_top8)
# +
end_col = 10
step_row = 11
if DEBUG:
print(':: debug mode ::')
train_features = train_features.iloc[::step_row, :end_col].reset_index(drop=True)
train_targets = train_targets.iloc[::step_row, :].reset_index(drop=True)
mask_top8 = mask_top8[::step_row]
train_drug = train_drug.iloc[::step_row, :].reset_index(drop=True)
group696 = group696.iloc[::step_row, :].reset_index(drop=True)
# test_features = test_features.iloc[::100, :]
# -
# <br>
#
# preprocess
# +
mask_trt = (train_features['cp_type'] == 'trt_cp').values
train = preprocess(train_features)
FEAT_COLUMNS = train_features.columns[2:]
# test = preprocess(test_features).values
del train_targets['sig_id']
target_cols = [col for col in train_targets.columns]
train, targets = remove_ctl_cp(train, train_targets)
# train_targets = train_targets.loc[train['cp_type']==0].reset_index(drop=True).values
# train = train.loc[train['cp_type']==0].reset_index(drop=True).values
# -
print(f'train shape: {train.shape}')
# print(f'test shape: {test.shape}')
print(f'train_targets shape: {targets.shape}')
# <br>
#
# fold split
# %%time
df_fold = get_not_drug_leak_folds(settings['split']['params']['n_splits'], train_features, train_drug, group696)
splitter = KFold(n_splits=settings['split']['params']['n_splits'], random_state=1, shuffle=True)
for top8_i in range(len(TOP8_DRUG)):
mask_drug = df_fold['drug_id'] == TOP8_DRUG[top8_i]
for fold_i, (train_idx, valid_idx) in enumerate(splitter.split(df_fold[mask_drug])):
# df_fold[['fold']][mask_drug].iloc[valid_idx, :] = fold_i + 1
# df_fold[['fold']][mask_drug] = fold_i + 1
_df_fold = df_fold[mask_drug]
_df_fold.fold.values[valid_idx] = fold_i + 1
df_fold.fold[mask_drug] = _df_fold.fold.values
print(df_fold.fold.unique())
print(df_fold[mask_trt].fold.unique())
# <br>
#
# remove the top-8 drugs
train = train[~mask_top8[mask_trt]]
targets = targets[~mask_top8[mask_trt]]
df_fold = df_fold[mask_trt & ~mask_top8].reset_index(drop=True)
# # Create model
n_splits = settings['split']['params']['n_splits']
n_epochs = settings['globals']['num_epochs']
splitter = MultilabelStratifiedKFold(**settings['split']['params'])
device = settings['globals']['device']
# criterion = criterion_ = nn.__getattribute__(
# settings['loss']['name'])(**settings['loss']['params'])
criterion = SmoothBCEwLogits(**settings['loss']['params'], smoothing=0)
# n_logspace = 5
n_logspace = 20
logspace = np.logspace(-2, 0.7, n_logspace)
logspace = np.hstack((np.array([0]), logspace))
logspace
# +
# %%time
# mean_log_loss_list, _oof, df_pi = run(splitter, train, targets, MoaModel, show_log=isShowLog, pi=isPI)
list_oof_score = []
for gamma in progress_bar(logspace):
print(f'smoothing: {gamma}')
save_path = f'{SAVE_DIR}oof_focal_loss_gamma_{gamma:.6f}.csv'
print(f'save: {save_path}')
criterion = FocalLoss2d(gamma=gamma)
mean_log_loss_list, _oof, df_pi = run_not_drug_leak(df_fold, train, targets, MoaModel,
show_log=isShowLog, pi=isPI)
oof_score = mean_log_loss(targets, _oof)
list_oof_score.append(oof_score)
# save oof
df_oof = pd.DataFrame(_oof, columns=target_cols)
df_oof.to_csv(save_path, index=False)
df_gamma = pd.DataFrame()
df_gamma['gamma'] = logspace
df_gamma['oof_score'] = list_oof_score
# -
# # Result
df_gamma
plt.axhline(df_gamma.oof_score[0], color='r', label='gamma=0')
plt.plot(df_gamma.gamma[1:], df_gamma.oof_score[1:], '.', label=None)
plt.xscale('log')
plt.xlabel('gamma')
plt.ylabel('oof_score')
plt.legend()
plt.axhline(df_gamma.oof_score[0], color='r', label='gamma=0')
plt.plot(df_gamma.gamma[1:15], df_gamma.oof_score[1:15], '.', label=None)
plt.xscale('log')
plt.xlabel('gamma')
plt.ylabel('oof_score')
plt.legend()
idx = df_gamma.oof_score.argmin()
df_gamma.gamma[idx]
# + active=""
#
| nb/023_optimize_focal_loss.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Tokenizer App
# <div style="position: absolute; right:0;top:0"><a href="./tokenizer.ipynb" style="text-decoration: none"> <font size="5">←</font></a>
# <a href="../evaluation.py.ipynb" style="text-decoration: none"> <font size="5">↑</font></a></div>
#
# Choose settings and run the Tokenizer
# +
from __init__ import init_vars
init_vars(vars())
from tokenizer.widgets import token_app
token_app()
| tomef/tokenizer/tokenizer.app.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + deletable=true editable=true
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# + [markdown] deletable=true editable=true
# # Pipelining estimators
# + [markdown] deletable=true editable=true
# In this section we study how different estimators may be chained.
# + [markdown] deletable=true editable=true
# ## A simple example: feature extraction and selection before an estimator
# + [markdown] deletable=true editable=true
# ### Feature extraction: vectorizer
# + [markdown] deletable=true editable=true
# For some types of data, for instance text data, a feature extraction step must be applied to convert it to numerical features.
# To illustrate this, we load the SMS spam dataset we used earlier.
# + deletable=true editable=true
import os
with open(os.path.join("datasets", "smsspam", "SMSSpamCollection")) as f:
lines = [line.strip().split("\t") for line in f.readlines()]
text = [x[1] for x in lines]
y = [x[0] == "ham" for x in lines]
# + deletable=true editable=true
from sklearn.model_selection import train_test_split
text_train, text_test, y_train, y_test = train_test_split(text, y)
# + [markdown] deletable=true editable=true
# Previously, we applied the feature extraction manually, like so:
# + deletable=true editable=true
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
vectorizer = TfidfVectorizer()
vectorizer.fit(text_train)
X_train = vectorizer.transform(text_train)
X_test = vectorizer.transform(text_test)
clf = LogisticRegression()
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
# + [markdown] deletable=true editable=true
# The situation where we learn a transformation and then apply it to the test data is very common in machine learning.
# Therefore scikit-learn has a shortcut for this, called pipelines:
# + deletable=true editable=true
from sklearn.pipeline import make_pipeline
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(text_train, y_train)
pipeline.score(text_test, y_test)
# + [markdown] deletable=true editable=true
# As you can see, this makes the code much shorter and easier to handle. Behind the scenes, exactly the same thing is happening as above: when calling fit on the pipeline, it will call fit on each step in turn.
#
# After the first step is fit, it will use the ``transform`` method of the first step to create a new representation.
# This will then be fed to the ``fit`` of the next step, and so on.
# Finally, on the last step, only ``fit`` is called.
#
# 
#
# If we call ``score``, only ``transform`` will be called on each step - this could be the test set after all! Then, on the last step, ``score`` is called with the new representation. The same goes for ``predict``.
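# The mechanics described above can be sketched in a few lines of plain Python (a simplification for illustration, not scikit-learn's actual implementation):

```python
# A simplified sketch of the fit/predict mechanics of a pipeline.
class TinyPipeline:
    def __init__(self, *steps):
        self.steps = steps

    def fit(self, X, y):
        # all but the last step: fit, then pass the transformed data onwards
        for step in self.steps[:-1]:
            X = step.fit(X, y).transform(X)
        # last step: only fit, no transform
        self.steps[-1].fit(X, y)
        return self

    def predict(self, X):
        # only transform -- the steps are never re-fit on (possibly test) data
        for step in self.steps[:-1]:
            X = step.transform(X)
        return self.steps[-1].predict(X)

class Doubler:  # toy transformer
    def fit(self, X, y):
        return self
    def transform(self, X):
        return [2 * x for x in X]

class Echo:  # toy final estimator
    def fit(self, X, y):
        return self
    def predict(self, X):
        return X

pipe = TinyPipeline(Doubler(), Echo()).fit([1, 2], [0, 1])
print(pipe.predict([3]))  # -> [6]
```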
# + [markdown] deletable=true editable=true
# Building pipelines not only simplifies the code, it is also important for model selection.
# Say we want to grid-search C to tune our Logistic Regression above.
#
# Let's say we do it like this:
# + deletable=true editable=true
# This illustrates a common mistake. Don't use this code!
from sklearn.model_selection import GridSearchCV
vectorizer = TfidfVectorizer()
vectorizer.fit(text_train)
X_train = vectorizer.transform(text_train)
X_test = vectorizer.transform(text_test)
clf = LogisticRegression()
grid = GridSearchCV(clf, param_grid={'C': [.1, 1, 10, 100]}, cv=5)
grid.fit(X_train, y_train)
# + [markdown] deletable=true editable=true
# ### What did we do wrong?
# + [markdown] deletable=true editable=true
# Here, we did grid-search with cross-validation on ``X_train``. However, when applying ``TfidfVectorizer``, it saw all of ``X_train``,
# not only the training folds! So it could use knowledge of the frequency of the words in the test folds. This is called "contamination" of the test set, and leads to overly optimistic estimates of generalization performance, or to badly selected parameters.
# We can fix this with the pipeline, though:
# + deletable=true editable=true
from sklearn.model_selection import GridSearchCV
pipeline = make_pipeline(TfidfVectorizer(),
LogisticRegression())
grid = GridSearchCV(pipeline,
param_grid={'logisticregression__C': [.1, 1, 10, 100]}, cv=5)
grid.fit(text_train, y_train)
grid.score(text_test, y_test)
# + [markdown] deletable=true editable=true
# Note that we need to tell the pipeline at which step we want to set the parameter ``C``.
# We can do this using the special ``__`` syntax. The name before the ``__`` is simply the lowercased name of the class, and the part after ``__`` is the parameter we want to set with grid-search.
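# To make the ``__`` naming concrete, here is a tiny, hypothetical illustration (not scikit-learn's real code) of how ``make_pipeline`` derives step names from lowercased class names and how a ``step__param`` string is resolved:

```python
class TfidfVectorizer:  # stand-in classes just for the naming demo
    pass

class LogisticRegression:
    pass

def step_name(est):
    # make_pipeline names each step after its lowercased class name
    return type(est).__name__.lower()

def set_nested_param(steps, param, value):
    # 'logisticregression__C' -> set attribute C on the step named 'logisticregression'
    name, _, attr = param.partition("__")
    setattr(steps[name], attr, value)

steps = {step_name(e): e for e in (TfidfVectorizer(), LogisticRegression())}
set_nested_param(steps, "logisticregression__C", 10)
print(steps["logisticregression"].C)  # -> 10
```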
# + [markdown] deletable=true editable=true
# <img src="figures/pipeline_cross_validation.svg" width="50%">
# + [markdown] deletable=true editable=true
# Another benefit of using pipelines is that we can now also search over parameters of the feature extraction with ``GridSearchCV``:
# + deletable=true editable=true
from sklearn.model_selection import GridSearchCV
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
params = {'logisticregression__C': [.1, 1, 10, 100],
"tfidfvectorizer__ngram_range": [(1, 1), (1, 2), (2, 2)]}
grid = GridSearchCV(pipeline, param_grid=params, cv=5)
grid.fit(text_train, y_train)
print(grid.best_params_)
grid.score(text_test, y_test)
# + [markdown] deletable=true editable=true
# <div class="alert alert-success">
# <b>EXERCISE</b>:
# <ul>
# <li>
# Create a pipeline out of a StandardScaler and Ridge regression and apply it to the Boston housing dataset (load using ``sklearn.datasets.load_boston``). Try adding the ``sklearn.preprocessing.PolynomialFeatures`` transformer as a second preprocessing step, and grid-search the degree of the polynomials (try 1, 2 and 3).
# </li>
# </ul>
# </div>
# + deletable=true editable=true
# # %load solutions/15A_ridge_grid.py
| notebooks/15.Pipelining_Estimators.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (statmath)
# language: python
# name: statmath
# ---
# +
import numpy as np
from sklearn.preprocessing import OneHotEncoder
import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.layers import Dense, Input, Dropout, Conv2D, MaxPooling2D, Flatten, Reshape, GaussianNoise
from tensorflow.keras.constraints import MinMaxNorm
# -
# First download synth and svhn datasets:
# - synth: http://yaroslav.ganin.net/ (SynNumbers)
# - svhn: http://ufldl.stanford.edu/housenumbers/
#
# Then extract the train part of each dataset.
# +
path_to_digits = "../datasets/digits/"
Xs = np.load(path_to_digits + "synth_X.npy")
ys = np.load(path_to_digits + "synth_y.npy")
Xt = np.load(path_to_digits + "svhn_X.npy")
yt = np.load(path_to_digits + "svhn_y.npy")
one = OneHotEncoder(sparse=False)
ys_cat = one.fit_transform(ys.reshape(-1, 1))
# +
def get_encoder(input_shape):
inputs = Input(input_shape)
modeled = Conv2D(32, 5, activation='relu')(inputs)
modeled = MaxPooling2D(2, 2)(modeled)
modeled = Conv2D(48, 5, activation='relu')(modeled)
modeled = MaxPooling2D(2, 2)(modeled)
modeled = Flatten()(modeled)
model = Model(inputs, modeled)
model.compile(optimizer="adam", loss='mse')
return model
def get_task(input_shape, output_shape=(10,), activation="softmax", C=1.):
inputs = Input(input_shape)
modeled = Dense(100, activation='relu',
kernel_constraint=MinMaxNorm(0, C),
bias_constraint=MinMaxNorm(0, C))(inputs)
modeled = Dense(100, activation='relu',
kernel_constraint=MinMaxNorm(0, C),
bias_constraint=MinMaxNorm(0, C))(modeled)
modeled = Dense(np.prod(output_shape), activation=activation)(modeled)
model = Model(inputs, modeled)
model.compile(optimizer="adam", loss='mse')
return model
# -
def get_discriminator(input_shape, C=1.):
inputs = Input(input_shape)
modeled = Dense(100, activation='relu',
kernel_constraint=MinMaxNorm(0, C),
bias_constraint=MinMaxNorm(0, C))(inputs)
modeled = Dense(100, activation='relu',
kernel_constraint=MinMaxNorm(0, C),
bias_constraint=MinMaxNorm(0, C))(modeled)
modeled = Dense(1, activation="sigmoid")(modeled)
model = Model(inputs, modeled)
model.compile(optimizer=Adam(0.001), loss='binary_crossentropy', metrics=["accuracy"])
return model
# +
inputs = Input((28, 28, 1))
encoder = get_encoder((28, 28, 1))
task = get_task(encoder.output_shape[1:])
encoded = encoder(inputs)
tasked = task(encoded)
model = Model(inputs, tasked)
model.compile(loss="categorical_crossentropy", optimizer=Adam(0.001), metrics=["accuracy"])
model.summary()
# -
np.random.seed(0)
tf.random.set_seed(0)
model.fit(Xs[:,:,:,np.newaxis], ys_cat, batch_size=128, epochs=30)
# +
X = np.concatenate((Xs, Xt))
y_lab = np.concatenate((np.zeros(len(Xs)), np.ones(len(Xt))))
X = encoder.predict(X[:,:,:,np.newaxis])
# -
discriminator = get_discriminator(X.shape[1:])
np.random.seed(0)
tf.random.set_seed(0)
discriminator.fit(X, y_lab, batch_size=128, epochs=10)
# +
path_to_models = "../datasets/models_digits/"
encoder.save(path_to_models + "encoder.h5")
task.save(path_to_models + "task.h5")
discriminator.save(path_to_models + "discriminator.h5")
# +
from adapt.feature_based import DANN
dann = DANN(get_encoder=get_encoder, get_task=get_task, get_discriminator=get_discriminator,
lambdap=0.1, loss="categorical_crossentropy", optimizer=Adam(0.001), metrics=["accuracy"])
# +
X = np.concatenate((Xs, Xt))
y = np.concatenate((ys, yt))
src_index = np.array(range(len(Xs)))
tgt_index = np.array(range(len(Xs), len(X)))
X = X.reshape(-1, 28, 28, 1)
y = tf.keras.utils.to_categorical(y, num_classes=10)
# -
np.random.seed(0)
tf.random.set_seed(0)
dann.fit(X, y, src_index, tgt_index, batch_size=128, epochs=30)
# +
path_to_models = "../datasets/models_digits/"
dann.encoder_.save(path_to_models + "dann_encoder.h5")
dann.task_.save(path_to_models + "dann_task.h5")
dann.discriminator_.save(path_to_models + "dann_discriminator.h5")
| notebooks/Digits_preprocessing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Draw sample MNIST images from dataset
# Demonstrates how to sample and plot MNIST digits using Keras API
# +
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from keras.datasets import mnist
import matplotlib.pyplot as plt
# load dataset
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# count the number of unique train labels
unique, counts = np.unique(y_train, return_counts=True)
print("Train labels: ", dict(zip(unique, counts)))
# count the number of unique test labels
unique, counts = np.unique(y_test, return_counts=True)
print("Test labels: ", dict(zip(unique, counts)))
# sample 25 mnist digits from train dataset
indexes = np.random.randint(0, x_train.shape[0], size=25)
images = x_train[indexes]
labels = y_train[indexes]
# plot the 25 mnist digits
plt.figure(figsize=(5,5))
for i in range(len(indexes)):
plt.subplot(5, 5, i + 1)
image = images[i]
plt.imshow(image, cmap='gray')
plt.axis('off')
# plt.savefig("mnist-samples.png")
plt.show()
plt.close('all')
# -
| keras/mlp/mnist-sampler.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import scipy.sparse
import json
import string
import pymorphy2
import gc
import gensim.models.keyedvectors as word2vec
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import LinearRegression
from sklearn.feature_extraction.text import TfidfVectorizer
from fse.models import SIF
from tqdm import tqdm_notebook
from multiprocessing import Pool, cpu_count
# -
path = '../../data/external/ruwikiruscorpora_upos_skipgram_300_2_2019/model.bin'
w2v_model = word2vec.KeyedVectors.load_word2vec_format(path, binary=True)
# +
punctuation = string.punctuation + '«»\n--––'
mapping = str.maketrans(punctuation, ' ' * len(punctuation))
ma = pymorphy2.MorphAnalyzer()
def normalize_text(s):
return " ".join(
[ma.normal_forms(word)[0] for word in s.translate(mapping).lower().split()]
)
def normalize_line(line):
item = json.loads(line)
item['content'] = normalize_text(item['content'])
item['title'] = normalize_text(item['title'])
if isinstance(item['image'], float):
item['image'] = np.full((96,),0)
else:
item['image'] = np.array(item['image'])
return item
# -
items = pd.read_csv('../../data/processed/processed_items.csv', index_col='itemId')
# +
# with open('items.json') as items_json:
# with Pool(cpu_count()) as pool:
# items_json_list = list(pool.imap(normalize_line, items_json))
# items = pd.DataFrame(items_json_list)
# items.set_index('itemId')
items.head()
# +
import nltk
nltk.download('stopwords')
#--------#
from nltk.corpus import stopwords
# -
items['title'] = items['title'].str.split()
# items['content'] = items['content'].str.split()
titles = list(items['title'].values)
# +
from pymystem3 import Mystem
conversion_table = {
'A': 'ADJ',
'ADV': 'ADV',
'ADVPRO': 'ADV',
'ANUM': 'ADJ',
'APRO': 'DET',
'COM': 'ADJ',
'CONJ': 'SCONJ',
'INTJ': 'INTJ',
'NONLEX': 'X',
'NUM': 'NUM',
'PART': 'PART',
'PR': 'ADP',
'S': 'NOUN',
'SPRO': 'PRON',
'UNKN': 'X',
'V': 'VERB'
}
m = Mystem()
def tag(word='пожар'):
processed = m.analyze(word)[0]
if 'analysis' not in processed or not processed["analysis"]:
return None
lemma = processed["analysis"][0]["lex"].lower().strip()
pos = processed["analysis"][0]["gr"].split(',')[0]
pos = pos.split('=')[0].strip()
    pos = conversion_table.get(pos, 'X')  # default to 'X' so the concatenation below never sees None
tagged = lemma + '_' + pos
return tagged
# -
russian_stopwords = set(stopwords.words("russian"))
from collections import defaultdict
# +
sif = defaultdict(int)
total_words = 0
for title in tqdm_notebook(titles):
if isinstance(title, float):
continue
for word in title:
tagged = tag(word)
total_words += 1
if tagged not in w2v_model or word in russian_stopwords:
continue
else:
tagged_id = w2v_model.wv.vocab[tagged].index
sif[tagged_id] += 1
sif = {word_id: num_occur / total_words for word_id, num_occur in sif.items()}
# -
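# The loop above stores raw relative frequencies ``p(w)``. In the original SIF scheme of Arora et al., a word's weight is ``a / (a + p(w))`` for a small smoothing constant ``a`` (e.g. 1e-3), so frequent words are downweighted; a minimal sketch:

```python
def sif_weights(freqs, a=1e-3):
    """freqs: dict mapping word id -> relative corpus frequency p(w).
    Returns the classic SIF weight a / (a + p(w)); rarer words weigh more."""
    return {word_id: a / (a + p) for word_id, p in freqs.items()}

w = sif_weights({0: 0.1, 1: 0.001})
print(w[1] > w[0])  # -> True: the rarer word gets the larger weight
```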
gc.collect()
len(sif)
def sif_embeddings(sentences, model, alpha=1e-3):
    """ Precomputes the indices of the sentences and uses numpy indexing
    to directly multiply and sum the vectors.
    Note: this simplified variant weights words by their raw frequency; ``alpha`` is unused.
    """
vlookup = model.wv.vocab
vectors = model.wv
output = []
for s in tqdm_notebook(sentences):
if isinstance(s, float):
output.append(np.zeros((300,)))
continue
# Pre-compute sentence indices
idx = [w2v_model.wv.vocab[tag(w)].index for w in s if tag(w) in w2v_model.wv.vocab]
        # Note: ``sif`` is a pre-computed dict mapping each word index to its relative corpus frequency, used here as the weight.
weights = np.array([sif.get(word_id, 0) for word_id in idx])
v = weights @ w2v_model.wv.vectors[idx]
words_num = len(idx)
words_num -= np.sum(weights == 0)
if words_num:
v /= words_num
else:
v *= 0
output.append(v)
return np.vstack(output).astype(np.float32)
# +
title_embs = sif_embeddings(titles, w2v_model)
items_num = items.shape[0]
del titles, items, sif, w2v_model
gc.collect()
# -
title_embs = np.load('title_embeddings.np.npy')
title_embs.shape
title_embs_w2v = np.concatenate((title_embs, np.zeros((1, 300))))
np.save('title_embeddings_w2v', title_embs_w2v)
item_features = scipy.sparse.hstack((scipy.sparse.eye(items_num),
scipy.sparse.csr_matrix(title_embs)),
format='csr')
# +
data = []
row = []
col = []
train_lines = sum(1 for line in open('train.json','r'))
with open('train.json') as train_file:
for i, line in enumerate(tqdm_notebook(train_file, total=train_lines)):
json_line = json.loads(line)
for item, rating in json_line['trainRatings'].items():
data.append(2 * int(rating) - 1)
row.append(i)
col.append(int(item))
train_int = scipy.sparse.coo_matrix((data, (row, col)))
del data, row, col
gc.collect()
# -
scipy.sparse.save_npz('item_features_embedding.npz', item_features)
item_features = scipy.sparse.load_npz("item_features_embedding.npz")
item_features.shape
import lightfm
model = lightfm.LightFM(no_components=64, loss='logistic', learning_schedule='adadelta', random_state=42)
model.fit(train_int, epochs=7, num_threads=cpu_count(), item_features=item_features, verbose=True)
sample = pd.read_csv('random_benchmark.csv')
sample['pred'] = model.predict(
sample.userId.values,
sample.itemId.values,
item_features=item_features,
num_threads=cpu_count(),
)
sample.sort_values(['userId', 'pred'], ascending=[True, False], inplace=True)
sample.drop(columns=['pred'], inplace=True)
sample.to_csv('lightfm_title_embedding_log.csv', index=False)
# !kaggle competitions submit -c 2018-hse-ml-competition-04 -f lightfm_title_embedding_log.csv -m "Title embedding log loss 5 epochs no_components=64"
| notebooks/hse_iad_competition/straight_to_the_top_sif.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Pipelines
# Pipelines are an integral part of creme. We encourage their usage and apply them in many of our examples.
#
# The `compose.Pipeline` contains all the logic for building and applying pipelines. A pipeline is essentially a list of estimators that are applied in sequence. The only requirement is that the first `n - 1` steps be transformers. The last step can be a regressor, a classifier, a clusterer, a transformer, etc. Here is an example:
# +
from creme import compose
from creme import linear_model
from creme import preprocessing
model = compose.Pipeline(
preprocessing.StandardScaler(),
preprocessing.PolynomialExtender(),
linear_model.LinearRegression()
)
# -
# You can also use the `|` operator, as so:
model = (
preprocessing.StandardScaler() |
preprocessing.PolynomialExtender() |
linear_model.LinearRegression()
)
# Or, equally:
model = preprocessing.StandardScaler()
model |= preprocessing.PolynomialExtender()
model |= linear_model.LinearRegression()
# A pipeline has a `draw` method that can be used to visualize it:
model.draw()
# `compose.Pipeline` inherits from `base.Estimator`, which means that it has a `fit_one` method. You would expect `fit_one` to update each estimator, but **that's not actually what happens**. Instead, the transformers are updated when `predict_one` (or `predict_proba_one` for that matter) is called. Indeed, in online machine learning, we can update the unsupervised parts of our model when a sample arrives. We don't have to wait for the ground truth to arrive in order to update unsupervised estimators that don't depend on it. In other words, in a pipeline, `fit_one` updates the supervised parts, whilst `predict_one` updates the unsupervised parts. It's important to be aware of this behavior, as it is quite different to what is done in other libraries that rely on batch machine learning.
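# The update split described above can be sketched in plain Python (a rough illustration, not creme's actual implementation): the unsupervised scaler learns during ``predict_one``, the supervised part only during ``fit_one``.

```python
# Rough illustration of the fit_one / predict_one update split.
class RunningMean:  # toy unsupervised transformer
    def __init__(self):
        self.n, self.total = 0, 0.0
    def update(self, x):
        self.n += 1
        self.total += x
    def transform(self, x):
        mean = self.total / self.n if self.n else 0.0
        return x - mean

class ZeroRegressor:  # toy supervised step
    def predict(self, x):
        return 0.0
    def fit(self, x, y):
        self.last = (x, y)

class SketchPipeline:
    def __init__(self, scaler, regressor):
        self.scaler, self.regressor = scaler, regressor
    def predict_one(self, x):
        self.scaler.update(x)  # unsupervised part updates here, no label needed
        return self.regressor.predict(self.scaler.transform(x))
    def fit_one(self, x, y):
        # supervised part updates here, once the ground truth y has arrived
        self.regressor.fit(self.scaler.transform(x), y)
        return self

pipe = SketchPipeline(RunningMean(), ZeroRegressor())
pipe.predict_one(4.0)
print(pipe.scaler.n)  # -> 1: predict_one updated the unsupervised statistics
```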
#
# Here is a small example to illustrate the previous point:
# +
from creme import datasets
dataset = datasets.TrumpApproval()
x, y = next(iter(dataset))
x, y
# -
# Let us call `predict_one`, which will update each transformer, but won't update the linear regression.
model.predict_one(x)
# The prediction is nil because each weight of the linear regression is equal to 0.
model['StandardScaler'].means
# As we can see, the means of each feature have been updated, even though we called `predict_one` and not `fit_one`.
#
# Note that if you call ``transform_one`` with a pipeline whose last step is not a transformer, then the output from the last transformer (which is thus the penultimate step) will be returned:
model.transform_one(x)
# In many cases, you might want to connect a step to multiple steps. For instance, you might want to extract different kinds of features from a single input. An elegant way to do this is to use a `compose.TransformerUnion`. Essentially, the latter is a list of transformers whose results will be merged into a single `dict` when `transform_one` is called. As an example, let's say that we want to apply a `preprocessing.RBFSampler` as well as the `preprocessing.PolynomialExtender`. This may be done as so:
# +
model = (
preprocessing.StandardScaler() |
(preprocessing.PolynomialExtender() + preprocessing.RBFSampler()) |
linear_model.LinearRegression()
)
model.draw()
# -
# Note that the `+` symbol acts as a shorthand notation for creating a `compose.TransformerUnion`, which means that we could have declared the above pipeline as so:
model = (
preprocessing.StandardScaler() |
compose.TransformerUnion(
preprocessing.PolynomialExtender(),
preprocessing.RBFSampler()
) |
linear_model.LinearRegression()
)
# Pipelines remove a lot of cruft by taking care of tedious details for you. They also let you clearly define what steps your model is made of. Finally, having your model in a single object means that you can move it around more easily. Note that you can include user-defined functions in a pipeline by using a `compose.FuncTransformer`.
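# A `compose.FuncTransformer` wraps a plain function so it can act as a pipeline step; below is a rough pure-Python sketch of the idea (not creme's actual implementation; the ``hour_of_day`` extractor and its ``'moment'`` key are made up for illustration):

```python
import datetime as dt

class FuncTransformerSketch:
    """Wrap a plain function x -> dict so it can act as a pipeline step."""
    def __init__(self, func):
        self.func = func
    def fit_one(self, x, y=None):
        return self  # stateless: nothing to learn
    def transform_one(self, x):
        return self.func(x)

def hour_of_day(x):
    # hypothetical feature extractor; assumes x carries a datetime under 'moment'
    return {"hour": x["moment"].hour}

t = FuncTransformerSketch(hour_of_day)
print(t.transform_one({"moment": dt.datetime(2020, 1, 1, 13, 30)}))  # -> {'hour': 13}
```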
| docs/user-guide/pipelines.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from __future__ import absolute_import, division, print_function
import sys
sys.path.append("TensorFlow20_ResNet")
import tensorflow as tf
from tensorflow.keras import Model
from models.resnet import resnet_18, resnet_34, resnet_50, resnet_101, resnet_152
import config
from prepare_data import generate_datasets
import math
from tqdm import tqdm
from IPython.display import clear_output
import seaborn as sns
from sklearn.neighbors import NearestNeighbors
import numpy as np
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras import layers
from tensorflow.keras import activations
# new_model = Model(inputs=model.layers[0].input, outputs=model.layers[-2].output)
model = resnet_18()
model.build_graph((32, 32, 32, 3))
model.layers[-1].output
# +
# """
# preparing backbone network
# """
# resnet50 = tf.keras.applications.ResNet50(
# include_top=True,
# weights=None,
# input_tensor=None,
# input_shape=(32, 32, 3),
# pooling=None,
# classes=512
# )
# x = tf.keras.layers.Dense(128, name='embeddings')(resnet50.layers[-2].output)
# backbone_network = Model(inputs=resnet50.input, outputs=x)
# backbone_network.save("backbone_network_empty.h5")
# +
# model = resnet_18(output_shape=y_train.shape[1], output_activation="softmax")
# # model.build(input_shape=(None, 32, 32, 3))
# model.build_graph((32, 32, 32, 3))
# model.compile(optimizer="sgd", loss='categorical_crossentropy')
# model.summary()
# -
# <h3>Preparing CIFAR Dataset</h3>
# +
from tensorflow.keras.datasets import cifar100, cifar10
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, Conv2D, MaxPooling2D
from tensorflow.keras.optimizers import Adam
import matplotlib.pyplot as plt
from tensorflow.keras.utils import to_categorical
from tensorflow.keras import losses
import scipy
import numpy as np
from sklearn.preprocessing import LabelEncoder
import tensorflow as tf
# Model configuration
img_width, img_height, img_num_channels = 32, 32, 3
no_epochs = 100
validation_split = 0.2
verbosity = 1
# Load CIFAR-10 data
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
y_train_orig = y_train.flatten().copy()
y_test_orig = y_test.flatten().copy()
X_train.shape
input_train = X_train
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
# Determine shape of the data
input_shape = (img_width, img_height, img_num_channels)
# Parse numbers as floats
input_train = X_train.astype('float32')
input_test = X_test.astype('float32')
# Normalize data
X_train = (X_train / 255).astype(np.float32)
X_test = (X_test / 255).astype(np.float32)
# labels for Discriminative Pretext Task
y_train = np.arange(X_train.shape[0])
# y_train = to_categorical(y_train)
y_test = np.arange(X_test.shape[0])
# y_test = to_categorical(y_test)
# backbone_network = tf.keras.models.load_model("backbone_network_empty.h5")
# x = layers.Activation(activations.relu, name="relu")(backbone_network.output)
# x = layers.Dense(y_train.shape[0], name="output_layer", activation="softmax")(x)
# param_classifier = Model(inputs=backbone_network.input, outputs=x)
# model = param_classifier
# # param_classifier.summary()
# +
# # model = resnet_18(output_shape=y_train.shape[1], output_activation="softmax")
# # # model.build(input_shape=(None, 32, 32, 3))
# # model.build_graph((32, 32, 32, 3))
# # model.compile(optimizer="sgd", loss='categorical_crossentropy')
# # model.summary()
# -
# <h3>Minimizing the Discriminatory Loss</h3>
# +
# model.load_weights('../../models/discriminative_pretext_model.100.h5')
# train=True
# model.load_weights("models/discriminative_pretext_model.200.h5")
# my_callbacks = [
# # tf.keras.callbacks.EarlyStopping(patience=10),
# tf.keras.callbacks.ModelCheckpoint(filepath='models/discriminative_pretext_model.{epoch:02d}.h5'),
# tf.keras.callbacks.TensorBoard(log_dir='.\\logs', histogram_freq=1, profile_batch = 100000000),
# ]
# +
# model = tf.keras.models.load_model("../../empty_model.h5")
# optimizer = tf.keras.optimizers.SGD(learning_rate=0.03)
# model.compile(optimizer=optimizer,
# loss=tf.keras.losses.sparse_categorical_crossentropy) # , metrics=['accuracy']
# history = model.fit(X_train, y_train, epochs=500, batch_size=128) # callbacks=my_callbacks,
# -
# <h3>Fitting the model using GradientTape</h3>
# +
# layer_outputs = [layer.output for layer in model.layers]
# layer_models = [tf.keras.Model(inputs=model.input, outputs=output) for output in layer_outputs]
# layer_embs = [layer_model(images[0:1]) for layer_model in layer_models]
# layer_embs[-50].numpy()
# +
def shuffle_dataset(X_train, y_train):
assert (X_train.shape[0] == y_train.shape[0]), "X and y shapes are not equal"
idxes = np.arange(X_train.shape[0])
np.random.shuffle(idxes)
return X_train[idxes], y_train[idxes]
# model.load_weights('../../models/discriminative_pretext_model.100.h5')
# +
# y_train = np.expand_dims(np.argmax(y_train, axis=1), axis=1)
# images.shape, labels.shape, embs.shape
# +
# # https://github.com/tensorflow/tensorflow/issues/28901
# labels.shape, embs.shape
# +
# model = tf.keras.applications.ResNet50(
# include_top=True,
# weights=None,
# input_tensor=None,
# input_shape=(32, 32, 3),
# pooling=None,
# classes=X_train.shape[0]
# )
# -
backbone_network = tf.keras.models.load_model("../../backbone_network_empty.h5")
x = layers.Activation(activations.relu, name="relu")(backbone_network.output)
x = layers.Dense(y_train.shape[0], name="output_layer", activation="softmax")(x)
param_classifier = Model(inputs=backbone_network.input, outputs=[x, backbone_network.output])
model = param_classifier
# +
epochs = 100
m = X_train.shape[0]
batch_size = 128
# model = tf.keras.models.load_model("../../empty_model.h5")
optimizer = tf.keras.optimizers.SGD(learning_rate=0.03)
@tf.function
def train_step(images, labels):
with tf.GradientTape() as tape:
# tape.watch(model.trainable_weights)
embs = model(images, training=True)[0]
loss_value = tf.reduce_mean(tf.keras.losses.sparse_categorical_crossentropy(labels, embs))
# loss_value = tf.keras.losses.sparse_categorical_crossentropy(labels, embs)
grads = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
return loss_value
for e in range(epochs):
e+=1
b = 0
batches = range(0, m, batch_size)
batch_losses = list()
print('epoch: {:02d}/{:02d}'.format(e, epochs), end="\r")
X_train, y_train = shuffle_dataset(X_train, y_train)
train_ds = tf.data.Dataset.from_tensor_slices((X_train, y_train)).shuffle(100000).batch(batch_size, drop_remainder=True)
for i, (images, labels) in enumerate(train_ds):
loss_value = train_step(images, labels)
loss_value_np = loss_value.numpy().mean()
batch_losses.append(loss_value_np)
print('{:05d}/{:05d} - loss: {:.05f}'.format(i, m//128, loss_value_np), end="\r")
b+=1
print("getting kmeans accuracy", end="\r")
[nn_acc, knn_acc, kmeans_acc] = get_nearest_neighbor_accuracies(model, X_train, X_test, y_train_orig, y_test_orig)
print("kmeans accuracy done", end="\r")
# print('epoch {:02d} ({:05d}/{:05d}) - loss: {:.05f}'.format(e, idx+batch_size, m, train_loss.result()))
print('epoch {:02d} ({:05d}/{:05d}) - loss: {:.05f} - nn_acc: {:.03f} - knn_acc: {:.03f} - kmeans_acc: {:.03f}'.format(e, i, m//128, np.mean(batch_losses), nn_acc, knn_acc, kmeans_acc))
# +
# new_layer_outputs = [layer.output for layer in model.layers]
# new_layer_models = [tf.keras.Model(inputs=model.input, outputs=output) for output in layer_outputs]
# new_layer_embs = [layer_model(images[0:1]) for layer_model in layer_models]
# new_layer_embs[-50].numpy()
# -
# <h3>NN / KNN / KMeans Accuracy</h3>
# +
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score
import scipy.stats  # needed below: `import scipy` alone does not load the stats submodule
def get_embs(model, X_train, bsize=1000, verbose=0):
"""
Inputs:
model
X_train
bsize: batch_size to process in one go
"""
for idx in np.arange(0, X_train.shape[0], bsize):
if idx == 0:
embs = model(X_train[idx:idx+bsize])[1].numpy()
else:
embs = np.vstack([embs, model(X_train[idx:idx+bsize])[1].numpy()])
return embs
def get_knn_accuracy(train_embs, test_embs, y_train_orig, y_test_orig):
knn = KNeighborsClassifier(n_neighbors=5, algorithm='auto', metric='euclidean', n_jobs=-1)
# cosine_similarity_tf, scipy.spatial.distance.cosine
knn.fit(train_embs, y_train_orig)
predictions = knn.predict(test_embs)
acc_score = accuracy_score(y_test_orig, predictions)
return acc_score
def get_nn_accuracy(test_embs, y_test_orig):
nbrs = NearestNeighbors(n_neighbors=2, algorithm='auto', n_jobs=7, metric="euclidean").fit(test_embs)
distances, indices = nbrs.kneighbors(test_embs)
a = y_test_orig[indices]
def f(x): # finding cluster accuracy
return ((x[0] == x[1:]).sum()/(x.shape[0]-1))==1.0
cluster_accuracy = np.apply_along_axis(f, 1, a)
cluster_accuracy = cluster_accuracy.sum()/cluster_accuracy.shape[0]
return cluster_accuracy
def kmeans_accuracy_step(predictions, y_test_orig):
accuracies = list()
for unique in np.unique(predictions):
idxes = np.where(predictions == unique)[0]
actual_classes = y_test_orig[idxes]
mode_class_count = scipy.stats.mode(actual_classes)[1][0]
mode_class = scipy.stats.mode(actual_classes)[0][0]
accuracy = np.round(mode_class_count/actual_classes.shape[0], 3)
accuracies.append(accuracy)
mean_accuracy = np.round(np.mean(accuracies), 4)
return mean_accuracy
def get_kmeans_accuracy(train_embs, test_embs, y_test_orig, verbose=0):
all_accuracies = list()
for i in range(2):
km = KMeans(n_clusters=10, n_jobs=-1)
km.fit(train_embs)
predictions = km.predict(test_embs)
accuracy_iteration = kmeans_accuracy_step(predictions, y_test_orig)
if verbose: print(i, ":", accuracy_iteration)
all_accuracies.append(accuracy_iteration)
kmeans_accuracy = np.mean(all_accuracies)
return kmeans_accuracy
def get_nearest_neighbor_accuracies(model, X_train, X_test, y_train_orig, y_test_orig, verbose=0):
train_embs = get_embs(model, X_train, bsize=512)
test_embs = get_embs(model, X_test, bsize=512)
# KNN
knn_accuracy = get_knn_accuracy(train_embs, test_embs, y_train_orig, y_test_orig)
# Kmeans
kmeans_accuracy = get_kmeans_accuracy(train_embs, test_embs, y_test_orig)
# Nearest Neighbors
nn_accuracy = get_nn_accuracy(test_embs, y_test_orig)
return [nn_accuracy, knn_accuracy, kmeans_accuracy]
get_nearest_neighbor_accuracies(model, X_train, X_test, y_train_orig, y_test_orig)
# -
# <h3>KNN Accuracy</h3>
# +
"""
Will use this function later
"""
@tf.function
def cosine_similarity_tf(a, b):
def c(a):
return tf.reduce_sum(tf.square(a))
a = tf.cast(tf.expand_dims(a, 1), tf.float64)
b = tf.cast(tf.expand_dims(b, 1), tf.float64)
    numerator = tf.matmul(tf.transpose(a), b)
    denominator = tf.math.sqrt(c(a)) * tf.math.sqrt(c(b))
    # Return a tensor: calling .numpy() inside a @tf.function fails in graph mode.
    return tf.squeeze(numerator / denominator)
# a = [1,2,3,4]
# b = [1,2,3,5]
# ans = cosine_similarity_tf(a, b)
# -
# <h3>Finding Nearest Neighbor (NN) Unsupervised Accuracy</h3>
from sklearn.neighbors import NearestNeighbors
# <center><h1>Computing the Accuracy</h1></center>
# <img src="images/computing NCE Accuracy.png" alt="Computing the Accuracy">
train_embs = get_embs(model, X_train, bsize=1000)
test_embs = get_embs(model, X_test, bsize=1000)
# <h3>Training for minimizing the Rotation Loss</h3>
# <h3>Finding Nearest Neighbors</h3>
# X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
def find_neighbor_accuracy(embeddings, plot=False):
embeddings = embeddings.numpy()
nbrs = NearestNeighbors(n_neighbors=10, algorithm='ball_tree').fit(embeddings)
distances, indices = nbrs.kneighbors(embeddings)
accuracies = list()
for i in range(len(true_classes)):
true_class = true_classes[i]
predicted_classes = true_classes[indices[i]][1:]
accuracy = (predicted_classes==true_class).sum()/predicted_classes.shape[0]
accuracies.append(accuracy)
if plot:
sns.distplot(accuracies)
return accuracies
# +
# embeddings = embeddings.numpy()
nbrs = NearestNeighbors(n_neighbors=10, algorithm='ball_tree').fit(embeddings)
distances, indices = nbrs.kneighbors(embeddings)
accuracies = list()
for i in range(len(true_classes)):
true_class = true_classes[i]
predicted_classes = true_classes[indices[i]][1:]
accuracy = (predicted_classes==true_class).sum()/predicted_classes.shape[0]
accuracies.append(accuracy)
sns.distplot(accuracies)
# -
indices = indices[:10]
plt.figure(figsize=(15, 3))
for i, image in enumerate(images.numpy()[indices[9]]):
    # print(image.shape)
    plt.subplot(1, indices.shape[1], i + 1)
    plt.axis('off')
    plt.imshow(image)
plt.tight_layout()
| CIFAR 100/3 - Resnet 18 Classification with Discriminative Pretext Task.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Evolution strategies on test functions for optimization
# ## Test functions for optimization
# +
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import cma
x = np.linspace(-10, 10, num=100)
y = np.linspace(-10, 10, num=100)
x, y = np.meshgrid(x, y)
f = cma.ff.rastrigin
z = np.reshape(f(np.stack([x, y], -1).reshape(-1, 2)), [100, 100])
fig, ax = plt.subplots(1, 1, figsize=(6, 4))
c = ax.contourf(x, y, z, cmap='jet')
fig.colorbar(c)
# +
from time import perf_counter
from datetime import timedelta
from cma.fitness_transformations import Expensify
from concurrent.futures import ProcessPoolExecutor
from lagom.utils import CloudpickleWrapper
from lagom import Logger
from lagom import CMAES
from lagom import CEM
from openaies.openaies import OpenAIES
sns.set()
cmaes = CMAES([9.0]*100, 1.0, {'popsize': 32, 'seed': 1})  # renamed to avoid shadowing the cma module
cem = CEM([9.0]*100, 1.0, {'popsize': 32, 'seed': 1, 'elite_ratio': 0.20, 'noise_scheduler_args': [1.0, 0.0, 500, 0]})
openaies = OpenAIES([9.0]*100, 1.0,
                    {'popsize': 32, 'seed': 1,
                     'sigma_scheduler_args': [1.0, 0.01, 450, 0],
                     'lr': 1e-1, 'lr_decay': 1.0, 'min_lr': 1e-6,
                     'antithetic': False, 'rank_transform': True})
for name, es in [('CMA', cmaes), ('CEM', cem), ('OpenAIES', openaies)]:
    t = perf_counter()
    with ProcessPoolExecutor(max_workers=80) as executor:
        logger = Logger()
        #g = Expensify(f, time=0.01)
        g = f
        g = CloudpickleWrapper(g)  # extremely useful for parallel, avoids getting stuck sometimes
        for generation in range(1000):
            solutions = es.ask()
            function_values = list(executor.map(g, solutions, chunksize=16))
            es.tell(solutions, function_values)
            if generation == 0 or (generation+1) % 100 == 0:
                print(f'Generation # {generation+1}: {es.result.fbest}')
            logger('generation', generation)
            logger('fbest', es.result.fbest)
    print(f'\nTotal time: {timedelta(seconds=round(perf_counter() - t))}')
    ax = sns.lineplot(x=logger.logs['generation'], y=logger.logs['fbest'], label=name)
ax.set_title('Rastrigin function - 100 dim')
# -
| baselines/bb_functions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] tags=[]
# # Question 2
# + [markdown] tags=[]
# ## Part 1: Working with dictionaries
# -
# Creating the first dictionary and filling it with values:
# + tags=[]
d1 = {}
# -
for i in range(1, 1000):
    d1[f"key_{i}"] = i
# Saving the dictionary to a JSON file:
import json
with open('result.json', 'w') as f:
    json.dump(d1, f)
# Reading the second dictionary from the JSON file:
# + tags=[]
with open('result.json') as f:
    d2 = json.load(f)
# + jupyter={"outputs_hidden": true} tags=[]
d2
# -
# ## Part 2: Calling a function from another notebook
# We can use the magic command **run** to run the whole notebook here:
# %run question2_2.ipynb
ping()
# ## Part 3: Printing a list where the ith element is the list of odd numbers lower than i
# The first solution is kind of a trick! I'm iterating over the numbers in the range 0-100 and making a list from 1 to i with a step size of 2:
# + jupyter={"outputs_hidden": true} tags=[]
print(*[[j for j in range(1, i, 2)] for i in range(100)], sep='\n')
# -
# We can also do the following:
# + jupyter={"outputs_hidden": true} tags=[]
print(*[[j for j in range(1, i) if j % 2 != 0] for i in range(100)], sep='\n')
# -
# ## Part 4: Printing all tuples (i, j) where i is not divisible by j
# + jupyter={"outputs_hidden": true} tags=[]
print(*[(i, j) for j in range(1, 1000) for i in range(1, 1000) if i % j != 0], sep='\n')
# -
# ## Part 5
# + [markdown] jupyter={"outputs_hidden": true} tags=[]
# I get the length of the list of numbers of the form 3k+1 that are divisible by 5 or 7 but not divisible by 35:
# -
len([i for i in range(100, 1000, 3) if ((i % 5 == 0) or (i % 7 == 0)) and i % 35 != 0])
# ## Part 6: Sum of prime numbers lower than 1000 in one line
# I've provided 3 different solutions for fun :D
sum([i for i in range(2, 1000) if sum([i % j == 0 for j in range(2, i)]) == 0])
sum([item for item in range(2, 1000) if len([i for i in range(2,item) if item % i == 0])==0])
sum([x for x in range(2, 1000) if not 0 in map(lambda z: x % z, range(2,x))])
| Homeworks/01. Introduction to Spark/01. Intro to Python.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.2 64-bit (''py38'': conda)'
# name: python382jvsc74a57bd0dba2eb6709c9760ece0c88a47ed7987433aa2131181da98756b93d9d7ffe864e
# ---
from pathlib import Path
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from xgboost import XGBClassifier
from xgboost import plot_importance
import warnings
warnings.filterwarnings('ignore')
sns.set()
# ## Read Data
# +
data_file = Path.cwd().parent / 'data' / 'kaggle' / 'base-race-2-percent-dataset-unique-CVIDs.csv'
df = pd.read_csv(data_file, index_col='Match_ID')
df = df[df.columns[1:]]
df.columns = (
'cv_id',
'experience',
'french_level',
'status',
'cv_received',
'cv_validated',
'cv_approved',
'black',
'match_status'
)
df.index.names = ['match_id']
df['cv_received'] = pd.to_datetime(df.cv_received)
df['cv_validated'] = pd.to_datetime(df.cv_validated)
df['cv_approved'] = pd.to_datetime(df.cv_approved)
df.sample(1)
# -
# ## EDA
status_go_match = [
'A01 - Offer Preparation Started',
'A02 - Offer Sent to Candidate',
'A03 - Offer accepted',
'A03 - Offer declined',
'A03 - Process Cancelled',
'A04 - Hiring Request Started',
'A04a – Hiring Started / Collecting signatures',
'A04b – Hiring Started / Contract signed',
'A05 - Onboarding',
'A06 - Offboarding',
'B01 - Subcontracting Started',
'B02 - Subcontracting Signed',
'D01 – Resignation',
'T01- Terminated',
'Candidate validated'
]
df_go = df[df['status'].isin(status_go_match)]
no_go_pct = (1 - df_go.shape[0] / df.shape[0]) * 100
print("Input data has only {}/{} ({}%) accepted candidates.".format(
df_go.shape[0],
df.shape[0],
round(df_go.shape[0] / df.shape[0] * 100, 2)
))
print("Since the data is so skewed, we will not use accuracy as a performance measure")
# ## Feature Engineering
# ### Add Computed features
df['response_time'] = (df['cv_validated'] - df['cv_received']).dt.days
df['decision_time'] = (df['cv_approved'] - df['cv_validated']).dt.days
df.sample(1)
# ### Add Mapped Status
# Boolean _Go_ -> 1 or _No Go_ -> 0
status_map = {
'Rejected': 0,
'CV refused': 0,
'A02 - Offer Sent to Candidate': 1,
'Candidate refused': 0,
'A03 - Offer declined': 1,
'D01 – Resignation': 1,
'A05 - Onboarding': 1,
'Candidate dropped out': 0,
'CV dropped out': 0,
'T01- Terminated': 1,
'A03 - Process Cancelled': 1,
'Dropped out': 0,
'Approved': 0,
'CV sent to France': 0,
'Matched': 0,
'Candidate validated': 1,
'A01 - Offer Preparation Started': 1,
'A04b – Hiring Started / Contract signed': 1,
'A03 - Offer accepted': 1,
'CV approved': 0,
'A04 - Hiring Request Started': 1,
'Sent to Client': 0
}
df['status_mapped'] = df.status.map(status_map).astype(int)
df.sample(1)
# ### Remove Outliers
# +
rt_outliers = df.index[df['response_time'] < 0]
dt_outliers = df.index[df['decision_time'] < 0]
# Negative durations are data errors; replace them with the column mean.
# (The previous fillna was a no-op, since the outliers are negative values, not NaN.)
df.loc[rt_outliers, 'response_time'] = df.response_time.mean()
df.loc[dt_outliers, 'decision_time'] = df.decision_time.mean()
# -
# ### Augmenting
df['french_level'] = df['french_level'].fillna('0')
df = df[df['experience'].notnull()]
df.skew() # <- TODO address high skew
ct = pd.crosstab(df['response_time'], df['status_mapped'])
ct.columns = ['Positive Candidate', 'Negative Candidate']
ct.head()
# ### Reduce skew
# +
from scipy.stats import norm, skew
plt.title('Before transformation')
rt = df.response_time.dropna() + 1
sns.distplot(df.response_time)
plt.figure()
plt.title('After Transformation')
sns.distplot(rt.apply(np.log), fit=norm)
# +
from scipy.stats import norm, skew
from scipy.special import boxcox1p
plt.title('Before transformation')
rt = df.decision_time.dropna() + 1
sns.distplot(df.decision_time)
plt.figure()
plt.title('After Transformation')
sns.distplot(rt.apply(np.log), fit=norm)
# +
# df = df[['status_mapped', 'french_level', 'experience', 'response_time', 'decision_time']]
# -
df.head()
# ### Label encoding
# Label encoding replaces each unique categorical value with an integer so the model can work with categorical data
# +
# Split decision times into groups of 20 day periods
decision_time_splits = np.ceil(df['decision_time'].max() / 20).astype(int)
decision_map = pd.concat(
pd.Series(str(i + 1), index=range(i * 20, 20 + i * 20))
for i in range(decision_time_splits)
)
df['response_time'] = df['response_time'].map(decision_map)
df['decision_time'] = df['decision_time'].map(decision_map)
# Replace decision and response times by their log values
df['decision_time'] = df['decision_time'].astype(float).dropna().apply(np.log)
df['response_time'] = df['response_time'].astype(float).dropna().apply(np.log)
# Rename target/dependent variable
df.rename(index=str, columns={'status_mapped': 'y'}, inplace=True)
# -
# Encode (replace unique values by integers) experience and french level
# +
le = LabelEncoder()
le.fit(df['experience'].unique())
df['experience'] = le.transform(df['experience'])
le.fit(df['french_level'].unique())
df['french_level'] = le.transform(df['french_level'])
df['french_level'].fillna('0', inplace=True)
le.fit(df['black'].unique())
df['black'] = le.transform(df['black'])
# -
# Remove rows with null values
# +
response_time_nulls = df.response_time[df.response_time.isnull()].index
decision_time_nulls = df.decision_time[df.decision_time.isnull()].index
french_level_nulls = df.french_level[df.french_level.isnull()].index
indices_to_drop = response_time_nulls.union(decision_time_nulls).union(french_level_nulls)
df.drop(indices_to_drop, inplace=True)
# -
# ### Prepare for one hot encoding
# +
experience_unique = df.experience.unique()
french_level_unique = df.french_level.unique()
black_unique = df.black.unique()
experience_unique, french_level_unique, black_unique
# -
experience_map = dict((e, int(bool(e > max(experience_unique) / 2))) for e in experience_unique)
french_level_map = dict((f, int(bool(f + 1 > max(french_level_unique) / 2))) for f in french_level_unique)
sorted(experience_map.items()), sorted(french_level_map.items())
df['experience'] = df['experience'].map(experience_map)
df['french_level'] = df['french_level'].map(french_level_map)
# ### One hot encoding
for column in ('french_level', 'experience', 'black'):
dummies = pd.get_dummies(df[column], prefix=f'_{column}')
df = df.join(dummies, how='outer').drop([column], axis=1)
df.columns.to_list()
df = df[[
'y',
'response_time',
'decision_time',
'_french_level_0.0',
'_french_level_1.0',
'_experience_0',
'_experience_1',
'_black_0',
'_black_1'
]]
y = df['y']
X = df.drop(['y', '_experience_0', '_french_level_0.0', '_black_0'], axis=1)
# ### Hold out & Cross validation
n_hold_out = 100
# Carve the hold-out set off the end first, so it does not overlap the training data
X_cv, y_cv = X[-n_hold_out:], y[-n_hold_out:]
X, y = X[:-n_hold_out], y[:-n_hold_out]
sns.heatmap(df.corr(), cmap='coolwarm', annot=True, fmt='.1f')
# ## Split, train and predict on test set
# +
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = XGBClassifier()
model.fit(X_train, y_train)
# -
y_pred = model.predict(X_test)
from pprint import pprint
pprint(classification_report(y_test, y_pred, output_dict=True))
plot_importance(model)
# shap_values were not computed above; a minimal way to obtain them for an
# XGBoost model (assumes the shap package is installed):
import shap
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
df_shap = pd.DataFrame(shap_values, columns=X.columns)
| xgboost/xgboost-kaggle-race.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %load_ext lab_black
import pandas as pd
import os
from joblib import dump, load
from sklearn.model_selection import train_test_split
from sklearn.model_selection import KFold
from sklearn.metrics import classification_report
from sklearn.metrics import accuracy_score
from sklearn.metrics import precision_score
from sklearn.metrics import r2_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import Ridge
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import Lars
import matplotlib
# -
# ## IMPORT DATA
df_wne = pd.read_csv(os.path.join("Data", "wne_test_train.csv"))
df_wne["FAT_CARB"] = df_wne["FAT"] / df_wne["CARBS"]
df_wne["FAT_PRO"] = df_wne["FAT"] / df_wne["PROTEIN"]
df_wne["PROTEIN_NEW"] = df_wne["PROTEIN_main"] * df_wne["PROTEIN"]
df_wne.head(10)
df_wne_holdout = pd.read_csv(os.path.join("Data", "wn_hldot.csv"))
df_wne_holdout["FAT_CARB"] = df_wne_holdout["FAT"] / df_wne_holdout["CARBS"]
df_wne_holdout["FAT_PRO"] = df_wne_holdout["FAT"] / df_wne_holdout["PROTEIN"]
df_wne_holdout["PROTEIN_NEW"] = (
df_wne_holdout["PROTEIN_main"] * df_wne_holdout["PROTEIN"]
)
df_wne_holdout
X = df_wne[
["WINE_CLASS", "PROTEIN", "PROTEIN_main", "CALORIES", "SODIUM", "FAT_PRO"]
].values
# X = x.astype(int)
y = df_wne["WINE"].values
# y = Y.astype(int)
# ## TEST TRAIN SPLIT
# +
kf = KFold(n_splits=20, shuffle=True, random_state=1234)
kf.get_n_splits(X)
print(kf)
for train_index, test_index in kf.split(X):
    print("TRAIN:", train_index, "TEST:", test_index)
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]
# NOTE: each iteration overwrites the split, so only the last fold is used below
# X_train, X_test, y_train, y_test = train_test_split(
# X, y, test_size=0.175, random_state=7456
# )
# -
# ## Random forest, GaussianNB, Linear Regression, Weighted RF, NN
# +
# clf = MLPClassifier(
# solver="sgd", alpha=1e-2, max_iter=500, hidden_layer_sizes=(175,), random_state=1
# )
# clf.fit(X_train, y_train)
# y_pred = clf.predict(X_test)
# print(classification_report(y_test, y_pred))
# lrr = Ridge()
# lrr.fit(X_train, y_train)
# y_pred = lrr.predict(X_test)
# print(r2_score(y_test, y_pred))
# lr_wne = LinearRegression()
# lr_wne.fit(X_train, y_train)
# y_pred = lr_wne.predict(X_test)
# print(r2_score(y_test, y_pred))
# lar_wne = Lars()
# lar_wne.fit(X_train, y_train)
# y_pred = lar_wne.predict(X_test)
# print(r2_score(y_test, y_pred))
# nbm_wne = MultinomialNB()
# nbm_wne.fit(X_train, y_train)
# y_pred = nbm_wne.predict(X_test)
# print(classification_report(y_test, y_pred))
# nb_wne = GaussianNB()
# nb_wne.fit(X_train, y_train)
# y_pred = nb_wne.predict(X_test)
# print(classification_report(y_test, y_pred))
wrf_wne = RandomForestClassifier(class_weight="balanced")
wrf_wne.fit(X_train, y_train)
y_pred = wrf_wne.predict(X_test)
print(classification_report(y_test, y_pred, zero_division="warn"))
# rf_wne = RandomForestClassifier()
# rf_wne.fit(X_train, y_train)
# y_pred = rf_wne.predict(X_test)
# print(classification_report(y_test, y_pred))
# +
# dump(lrr, "lrr.joblib")
# dump(lr_wne, "lr_wne.joblib")
# dump(lar_wne, "lar_wne.joblib")
# dump(nb_wne, "nb_wne.joblib")
# dump(nbm_wne, "nbm_wne.joblib")
dump(wrf_wne, "wrf_wne.joblib")
# dump(rf_wne, "rf_wne.joblib")
# -
# ## Run Model
X = df_wne_holdout[
["WINE_CLASS", "PROTEIN", "PROTEIN_main", "CALORIES", "SODIUM", "FAT_PRO"]
].values
Y = df_wne_holdout["WINE"].values
y = Y.astype(int)
# +
# reg = load("nb_wne.joblib")
# reg = load("nb.joblib")
# reg = load("rf_wne.joblib")
reg = load("wrf_wne.joblib")
# -
prediction = reg.predict(X)
print(prediction)
# +
# df_pred = pd.DataFrame(prediction)
# df_pred.head(15)
# df_pred.rename(columns={0: "Predicted Bo_Ft"})
df_wne_holdout["WINE_PRED"] = prediction.astype(int)
df_wne_holdout[["WINE", "WINE_PRED"]]
# -
ax1 = df_wne_holdout.plot.scatter(x="WINE", y="WINE_PRED", c="DarkBlue")
| .ipynb_checkpoints/wne_pred-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introduction to Natural Language Processing
#
# Natural Language Processing can be divided into two more specific subsets as shown in the figure below. During this part of the course we explain how to deal with most topics related to NLP.
#
# In the first part, we explain some basic methods for handling strings and documents, such as:
# - using regular expressions for string processing,
# - using NLTK to generate tokens from documents,
# - tagging text with NLTK,
# - using Unicode and normalizing strings,
# - explaining what lemmatization and stemming are,
# - analyzing text with noun chunks and named entity recognition.
#
# The second part is dedicated to text understanding. We use machine learning methods and explain topics such as:
# - Bag of Words,
# - Tf-idf,
# - word2vec,
# - intent recognition.
#
# The third part consists of methods used for text generation, such as:
# - the n-gram model,
# - the seq2seq model.
#
# The last part covers the quality metrics used for NLP methods.
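# The bag-of-words representation listed under the second part can be sketched in a few lines of plain Python. This is a toy illustration only, not the implementation used later in the course:

```python
from collections import Counter

def bag_of_words(documents):
    """Build a shared vocabulary and one count vector per document."""
    tokenized = [doc.lower().split() for doc in documents]
    vocabulary = sorted(set(word for doc in tokenized for word in doc))
    vectors = [
        [Counter(doc)[word] for word in vocabulary]
        for doc in tokenized
    ]
    return vocabulary, vectors

vocab, vecs = bag_of_words(["the cat sat", "the cat ate the fish"])
print(vocab)   # ['ate', 'cat', 'fish', 'sat', 'the']
print(vecs)    # [[0, 1, 0, 1, 1], [1, 1, 1, 0, 2]]
```

# Libraries such as scikit-learn provide the same idea as `CountVectorizer`, with a far richer feature set.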
# ## NLP process
#
# Based on **[1]**, we have a process that is known to solve most of the NLP problems.
#
# 1. Gather your data
# 2. Clean your data
# 3. Find a good representation
# 4. Classification
# 5. Inspection
# 6. Accounting for vocabulary structure
# 7. Leveraging semantics
# 8. Leveraging syntax using end-to-end approaches
#
# In this notebook we focus mostly on the second and third part of the process.
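# Steps 2 and 3 of the process (cleaning the data and finding a first representation) can be sketched with the standard library alone. This is a toy illustration; later notebooks use dedicated NLP tools:

```python
import re

def clean(text):
    """Step 2: lowercase, strip punctuation, collapse whitespace."""
    text = text.lower()
    text = re.sub(r"[^a-z0-9\s]", " ", text)   # drop punctuation
    return re.sub(r"\s+", " ", text).strip()   # collapse whitespace

def tokenize(text):
    """Step 3 (first pass): whitespace tokenization of the cleaned text."""
    return clean(text).split()

print(tokenize("Hello, NLP world!  It's 2019..."))
# ['hello', 'nlp', 'world', 'it', 's', '2019']
```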
# ## Tools
#
# There are plenty of tools that can be used for NLP. A few of the most popular are:
# - regular expressions,
# - [NLTK](https://www.nltk.org),
# - [TextBlob](http://textblob.readthedocs.io/en/dev/),
# - [spaCy](https://spacy.io),
# - [CoreNLP](https://stanfordnlp.github.io/CoreNLP/).
# ### References
#
# **[1]** Natural Language Processing with Python, <NAME>, <NAME>, <NAME>. O'Reilly 2009
| ML1/nlp/101_NLP_Introduction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # A model of the intergenerational movement of traits for populations reproducing under polygenic inheritance
# ## Abstract
# This work describes and presents the results from a mathematical model based on the standard understanding of polygenic inheritance. When applied to intergenerational movement between quintiles, the model obtained an r<sup>2</sup> of 0.92 and 0.93 with the Brookings Institution measures of intergenerational education and income mobility, respectively. The model better predicted measures of education and income mobility than those measures predicted one another: r<sup>2</sup> = 0.84. One original question motivated the creation of the model: consider the tallest one fifth of trees in a forest. Under polygenic inheritance, are a majority of them the offspring of the previous generation's tallest one fifth of trees or are a majority of them the offspring of the previous generation's shorter four fifths of trees? While tall trees are more likely to have tall offspring, there are far more average/short trees than tall trees. It is not immediately clear whether or at what point the effect of a higher probability of tall offspring outweighs the effect of a far greater number of offspring. A simulation of the model showed that a minority (43%) of trees above the 80th percentile are the offspring of the previous generation’s tallest one fifth. The 72nd percentile is the equilibrium point at which the proportion is 50%. That is, of the trees above the 72nd percentile, half are the offspring of parents also above the 72nd percentile and half are the offspring of parents below the 72nd percentile.
# ## Introduction
# In biology, a phenotypic trait is a measurable trait that results from the expression of genes. As an example, the phenotype of hair color is the observed color while the genotype is the underlying genes that determine the color. The phenotypic traits Mendel studied in pea plants were unique in that they were determined by single genes. However, it is often the case that phenotypic traits are determined by many genes - sometimes hundreds or thousands. These traits are termed <a href="https://en.wikipedia.org/wiki/Polygene" target="_blank">polygenic traits</a> (many genes).
#
# In general, the population distribution for the phenotype of a polygenic trait falls into what is called a [normal or gaussian distribution](https://en.wikipedia.org/wiki/Normal_distribution). This phenomenon has been observed by plotting the frequency of phenotypes for a polygenic trait and finding a close approximation to a normal distribution. The greater the number of genes influencing the trait, the closer the approximation. This is thought to occur as a result of the many possible allelic combinations among individual genes. In this model, genes code for alleles with additive effects, either + or - on a measurement of the trait.
#
# One example of a polygenic trait is height: there are about 700 genes that influence human height, each of which has a very small effect, either positive or negative. The resultant population distribution of height is therefore Gaussian. The polygenic inheritance in this case is analogous to flipping 700 coins and recording the number of heads minus the number of tails. If one were to do this many times, one would obtain a normal distribution: often obtaining a more or less equal number of heads and tails and occasionally obtaining a far greater number of heads than tails or vice versa. In the case of height, the trait is univariate, meaning that it can be measured by only one value. However, traits are sometimes multivariate, and though the work presented here does not discuss such cases, future work likely will.
#
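# The 700-coin analogy is easy to verify numerically: summing many independent +/-1 allelic effects produces an approximately normal phenotype distribution. A quick sketch with illustrative sample sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
# 50,000 individuals, each with 700 additive +/-1 "allelic" effects
effects = rng.integers(0, 2, size=(50_000, 700), dtype=np.int8) * 2 - 1
phenotype = effects.sum(axis=1)   # heads minus tails

# A sum of n fair +/-1 variables has mean 0 and standard deviation sqrt(n) ~ 26.5
print(phenotype.mean(), phenotype.std())
```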
# As the phenotypes of a population fall under a normal distribution, their frequencies can be described by the following probability density function.
#
# \begin{equation*}
# \LARGE f(x)=\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{1}{2}(\frac{x-\mu}{\sigma})^2}
# \end{equation*}
#
# The parameter μ is the mean or expectation of the distribution (and also its median and mode); σ is its standard deviation. If the population is the parent-generation, then the distribution is made up of all the parent phenotypic values x<sub>p</sub> and their corresponding frequencies f(x<sub>p</sub>) as shown below.
#
# \begin{equation*}
# \LARGE f(x_p)=\frac{1}{\sigma_{pd}\sqrt{2\pi}}e^{-\frac{1}{2}(\frac{x_p-\mu_{pd}}{\sigma_{pd}})^2}
# \end{equation*}
#
# The parameters μ<sub>pd</sub> and σ<sub>pd</sub> are the mean and standard deviation of the parent-generation population.
#
# It is generally understood that polygenic traits are heritable, which means that a correlation exists between the phenotypes of parents and offspring. For example, as measured by [Luo et al](https://www.nature.com/articles/pr1998502), the correlation coefficient - also termed the heritability value - between parent and offspring for human height is 0.55-0.60. Under this model of regression toward the mean, parents whose height is 1 standard deviation above the mean have offspring whose height is on average 0.55-0.60 standard deviations above the mean. The model presented in their paper is based on the linear regression model, in which there is a straight regression line that provides the 'best' fit for the data points. Its equation is shown below.
#
# \begin{equation*}
# \LARGE \hat y = \hat \alpha + \hat{\beta} x
# \end{equation*}
#
# In the case of polygenic inheritance, x represents the phenotypic value of a set of parents and $\hat y$ represents the predicted offspring's phenotypic value. In future equations presented here, $\bar{x}_{o}$ will be used in place of $\hat y$. The parameters α and β are found by minimizing the sum of squared residuals. It can be [shown](https://en.wikipedia.org/wiki/Regression_toward_the_mean#Definition_for_simple_linear_regression_of_data_points) that if the mean and standard deviation of x and y are equal then the expected y can be given by the following equation.
#
#
# \begin{equation*}
# \LARGE \hat y = \bar{x} + r(x - \bar{x})
# \end{equation*}
#
# Where r is given by the following equation:
#
# \begin{equation*}
# \LARGE r = \frac{Cov[x,y]}{\sqrt{Var[x] Var[y]}}
# \end{equation*}
#
#
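# This identity (the least-squares slope equals r when the means and standard deviations of x and y are equal) can be checked numerically on simulated data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
x = rng.standard_normal(n)
# Construct y with corr(x, y) = 0.6 while keeping mean 0 and standard deviation 1
y = 0.6 * x + 0.8 * rng.standard_normal(n)

slope, intercept = np.polyfit(x, y, 1)
r = np.corrcoef(x, y)[0, 1]
# With equal means and standard deviations, the least-squares slope equals r
print(round(slope, 3), round(r, 3))
```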
# When applied to polygenic inheritance, the expected phenotypic value for the offspring of the parents at the phenotypic value x<sub>p</sub> is given by Luo et al. in the following equation [1].
#
# \begin{equation*}
# \LARGE \bar{x}_{o} = \mu_{pd} + r(x_p - \mu_{pd})
# \end{equation*}
#
# The parameter μ<sub>pd</sub> is the mean of the parent population distribution and the parameter r is the correlation coefficient or heritability value between parent and offspring. This equation represents the current understanding of polygenic inheritance. While it gives the mean phenotypic value of the offspring of parents at x<sub>p</sub> it fails to describe their general distribution. In this work, it is suggested that the offspring of members of the parent population with phenotypic value x<sub>p</sub> are normally distributed with a mean at $\bar{x}_o$. The offspring distributions from each x<sub>p</sub> in the parent distribution sum to form the total offspring distribution. By keeping track of the contribution of sections of the parent distribution to sections of the total offspring distribution, it is possible to make meaningful statements about the intergenerational movement of traits for reproducing populations in nature and society.
# ## One Offspring Distribution
# This work proposes that the frequency of the phenotypic values for the offspring of parents at x<sub>p</sub> is normally distributed about $\bar{x}_o$. The distribution of the phenotypic values of the offspring of parents at x<sub>p</sub> is then given by the following equation:
#
# \begin{equation*}
# \LARGE g(x)=f(x_p)\frac{1}{r_s\sigma_{pd}\sqrt{2\pi}}e^{-\frac{1}{2}(\frac{x-\bar{x}_o}{r_s\,\sigma_{pd}})^2}
# \end{equation*}
#
# The offspring distribution is centered at $\bar{x}_o$. Its standard deviation is the parent-generation population standard deviation σ<sub>pd</sub> scaled by r<sub>s</sub>, and each of its values is scaled by the frequency of the parent phenotypic value f(x<sub>p</sub>).
#
# If r<sub>s</sub>=1, then the variance of the offspring from parents at x<sub>p</sub> is equal to the variance of the entire parent-generation population. While there are not yet literature measurements of r<sub>s</sub>, it would seem to be more likely that the variance is less than, and almost certainly not greater than that of the entire parent population. In that case, r<sub>s</sub> is more likely less than 1 as opposed to equal to or greater than 1. In a more complicated scenario not considered here, r<sub>s</sub> varies with x<sub>p</sub>.
#
# Note that the phenotypic value x<sub>p</sub> corresponds to the z-score z<sub>p</sub> - relative to the parent-generation population. A complete description of the one offspring distribution can be made with the following statement and two equations:
#
# The distribution of the offspring of parents at x<sub>p</sub> is a normal distribution centered at z-score z<sub>o</sub> (relative to the parent-generation population), with standard deviation σ<sub>o</sub>, and proportional to the value at f(x<sub>p</sub>).
#
# \begin{equation*}
# \LARGE z_o=r\,z_p
# \end{equation*}
#
# \begin{equation*}
# \LARGE \sigma_o=r_s\,\sigma_{pd}
# \end{equation*}
#
# The statement and two equations do not supply any additional information about the one offspring distribution. Instead, they provide an alternative way of describing the one offspring distribution that more clearly indicates the role of r and r<sub>s</sub>.
# ## Total Offspring Distribution
# While g(x) describes the distribution of offspring from only one x<sub>p</sub>, a function is needed to describe the distribution of the entire offspring-generation population. This distribution is made up of the combined one-offspring-distributions from each x<sub>p</sub> in the parent-generation population. The frequencies of the phenotypes of the offspring-generation population can then be described by the following probability density function.
#
# \begin{equation*}
# \LARGE G(x)=\int_{-\infty}^{\infty} g(x) \, dx_p
# \end{equation*}
#
# The frequency of each phenotypic value x in the offspring-generation population is obtained by summing the frequency at x for each one-offspring-distribution g(x).
#
# It is important to remark that this distribution G(x) appears by all measures to be a normal distribution. This lends credence to the model, as the offspring-generation population should indeed be normally distributed and, in most cases, have a mean and standard deviation equal to those of the parent-generation distribution. The mean of the total offspring distribution is always equal to the mean of the (total) parent distribution. On the other hand, the standard deviation of the total offspring distribution varies proportionally with both r and r<sub>s</sub>.
# ## Answering the Motivating Question
# At this point, it would seem to be possible to answer the motivating question: Are a majority of the tallest one fifth of trees in a forest the offspring of the previous generation's tallest one fifth? It is important to recognize that the area under a specific section of a population distribution bounded by phenotypic values represents the size of the population with those phenotypic values. In the case of the tallest one fifth of trees in a forest, the section is bound by k<sub>2</sub> and ∞, where k<sub>2</sub> represents the phenotypic value (height) at the 80th percentile of the population distribution. For a given phenotypic value x<sub>p</sub> in the parent-generation population, it is necessary to find the size of its offspring population that is located in the top quintile. This is achieved by integrating x<sub>p</sub>'s one offspring distribution from k<sub>2</sub> to ∞:
#
# \begin{equation*}
# \LARGE f(x_p)\frac{1}{\sigma_o\sqrt{2\pi}}\int_{k_2}^{\infty} e^{-\frac{1}{2}(\frac{x-\bar{x}_{\small o}}{\sigma_{\small o}})^2} dx
# \end{equation*}
#
# The integral provides the amount of offspring with a phenotypic value above k<sub>2</sub> from parents with the phenotypic value x<sub>p</sub> .
#
# To find what proportion of the offspring in the top fifth of the offspring-generation population are from parents in the top fifth of the parent-generation population, it is necessary to divide the amount of top fifth offspring from only those x<sub>p</sub> in the top fifth of the parent population by the amount of top fifth offspring from all x<sub>p</sub> in the parent population. This fraction gives the proportion of top fifth offspring from top fifth parents, the answer to the motivating question. The x<sub>p</sub> in the top fifth of the parent distribution are bounded by k<sub>1</sub> and ∞, where k<sub>1</sub> represents the height at the 80th percentile of the parent distribution. The following expression gives the amount of top fifth offspring from the top fifth parents.
#
# \begin{equation*} \LARGE
# \int_{k_1}^{\infty}f(x_p)\frac{1}{\sigma_o\sqrt{2\pi}}\int_{k_2}^{\infty} e^{-\frac{1}{2}(\frac{x-\bar{x}_{\small o}}{\sigma_{\small o}})^2}dx\,dx_p
# \end{equation*}
#
# This expression is then divided by the amount of top fifth offspring from all parents, which is a similar expression. The only difference is that the outer integral ranges over all members of the parent distribution (-∞ to +∞). The inner integral can be simplified with the [cumulative distribution function](https://en.wikipedia.org/wiki/Cumulative_distribution_function).
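# The double integral above can also be checked by direct simulation. The sketch below assumes standardized units, r = 0.55 (the lower literature heritability for height) and an r<sub>s</sub> chosen so that the offspring population keeps unit variance; these parameter choices are illustrative, not the ones behind the reported 43% figure:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000
r = 0.55                       # illustrative parent-offspring correlation
r_s = np.sqrt(1 - r ** 2)      # keeps the offspring generation at unit variance

parent = rng.standard_normal(n)                        # parent z-scores
offspring = r * parent + r_s * rng.standard_normal(n)  # one offspring per parent

k1 = np.quantile(parent, 0.8)      # 80th percentile of the parent generation
k2 = np.quantile(offspring, 0.8)   # 80th percentile of the offspring generation

top_offspring = offspring > k2
frac = (parent[top_offspring] > k1).mean()
print(f"Share of top-fifth offspring whose parents are also top-fifth: {frac:.2f}")
```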
# ## Intergenerational Movement and Two Types of Questions
# The calculations involved in answering the motivating question can be generalized to answer two types of questions.
#
# The first type of question is to ask what proportion of an arbitrary section of the total offspring distribution is from another arbitrary section of the parent distribution. For example, one could ask what proportion of the offspring-generation population with z-scores of between 1 and 1.5 are the offspring of members of the parent-generation population with z-scores of between -0.5 and 0. The motivating question was of this type, as it asked what proportion of a top section of the total offspring distribution was from the same top section of the parent distribution.
#
# The second type of question is to ask what proportion of the offspring of parents in an arbitrary section of the parent distribution end up in another arbitrary section of the total offspring distribution. For example, one could ask what proportion of the offspring from parents with z-scores of between -2 and -1 have z-scores of between 1 and 2.
#
# In answering these questions, it is helpful to define a Φ term as follows.
#
# \begin{equation*} \LARGE
# \Phi(k_1,k_2,k_3,k_4) \equiv \int_{k_1}^{k_2}f(x_p)\frac{1}{\sigma_o\sqrt{2\pi}}\int_{k_3}^{k_4} e^{-\frac{1}{2}(\frac{x-\bar{x}_{\small o}}{\sigma_{\small o}})^2}dx\,dx_p
# \end{equation*}
#
# This term gives the size of the population with phenotypic values between k<sub>3</sub> and k<sub>4</sub> that are the offspring of members of the parent generation with phenotypic values between k<sub>1</sub> and k<sub>2</sub>. In other words, it provides the amount of a specific section of the offspring-generation population from a specific section of the parent-generation population.
#
# #### Proportion Attributable
# To answer the first type of question, it is necessary to divide the Φ term for the specific sections of the parent and offspring-generation populations by the Φ term for the specific section of the offspring-generation population but the entire parent-generation population. This gives the proportion of the arbitrary section of the total offspring distribution that is the offspring of, or 'attributable to', the arbitrary section of the parent distribution. The proportion is equivalent to the probability that a given member of the arbitrary section of the total offspring distribution is the offspring of a member of the arbitrary section of the parent distribution. The proportion attributable is given by the following equation.
#
# \begin{equation*} \LARGE
# P_a(k_1,k_2,k_3,k_4) = \frac{\Phi(k_1,k_2,k_3,k_4)}{\Phi(-\infty,\infty,k_3,k_4)}
# \end{equation*}
#
# The parameters k<sub>3</sub> and k<sub>4</sub> give the bounds of the arbitrary section of the total offspring distribution and the parameters k<sub>1</sub> and k<sub>2</sub> give the bounds of the arbitrary section of the parent distribution.
# #### Proportion Destined
# To answer the second type of question, it is necessary to divide the Φ term for the specific sections of the parent and offspring-generation populations by the Φ term for the specific section of the parent-generation population but the entire offspring-generation population. This gives the proportion of the offspring from the arbitrary section of the parent distribution that end up in, or are 'destined to', the arbitrary section of the total offspring distribution. The proportion is equivalent to the probability that a given offspring of a parent in the arbitrary section of the parent distribution is a member of the arbitrary section of the total offspring distribution. The proportion destined is given by the following equation.
#
#
# \begin{equation*} \LARGE
# P_d(k_1,k_2,k_3,k_4) = \frac{\Phi(k_1,k_2,k_3,k_4)}{\Phi(k_1,k_2,-\infty,\infty)}
# \end{equation*}
#
# The parameters k<sub>3</sub> and k<sub>4</sub> give the bounds of the arbitrary section of the total offspring distribution and the parameters k<sub>1</sub> and k<sub>2</sub> give the bounds of the arbitrary section of the parent distribution.
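# Because x<sub>p</sub> is normal and each one offspring distribution is normal with mean regressed by r and standard deviation r<sub>s</sub>σ<sub>pd</sub>, the pair (x<sub>p</sub>, x<sub>o</sub>) is jointly bivariate normal (taking μ<sub>pd</sub> = 0 and σ<sub>pd</sub> = 1), and Φ reduces to a bivariate-normal rectangle probability. The sketch below is not part of the project's code; the r and r<sub>s</sub> values are the illustrative ones used later in the demonstration.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

r, r_s = 0.5, 0.9                      # regression coefficients (illustrative)
cov = [[1.0, r], [r, r**2 + r_s**2]]   # Cov(x_p, x_o) = r when sigma_pd = 1
mvn = multivariate_normal(mean=[0.0, 0.0], cov=cov)

BIG = 10.0  # stands in for infinity; 10 sd covers essentially all of the mass

def phi(k1, k2, k3, k4):
    """Mass of offspring in (k3, k4) whose parents lie in (k1, k2)."""
    k1, k2, k3, k4 = (max(min(k, BIG), -BIG) for k in (k1, k2, k3, k4))
    F = lambda a, b: mvn.cdf([a, b])
    # rectangle probability by inclusion-exclusion on the joint CDF
    return F(k2, k4) - F(k1, k4) - F(k2, k3) + F(k1, k3)

def p_attributable(k1, k2, k3, k4):
    return phi(k1, k2, k3, k4) / phi(-np.inf, np.inf, k3, k4)

def p_destined(k1, k2, k3, k4):
    return phi(k1, k2, k3, k4) / phi(k1, k2, -np.inf, np.inf)

# The motivating question: top fifth of offspring from top fifth of parents.
k1 = norm.ppf(0.8)                            # parent 80th percentile
k2 = np.sqrt(r**2 + r_s**2) * norm.ppf(0.8)   # offspring 80th percentile
print(p_attributable(k1, np.inf, k2, np.inf))  # roughly 0.43
```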
# ## Discussion
# While the equations in the model do not have closed-form solutions, they can be approximated numerically. As a result, the answers to the questions presented here are approximations whose precision is limited by computational speed.
#
# To obtain values for intergenerational movement between quintiles, P<sub>d</sub> was obtained for each quintile of the parent and total offspring distributions. The P<sub>d</sub>'s were then compared to the measured values for education and income mobility provided by the Brookings Institution. If income and education are normally distributed in the population with regression towards the mean between parent and offspring, then a high correlation between the values provided by this model and those provided by the Brookings Institution might indicate that the equations presented here provide a good model of reproducing normal population distributions with regression towards the mean.
# ### References
# [1] https://www.nature.com/articles/pr1998502
#
#
#
#
#
# # Demonstration of Code
import importlib
import tree_functions as tree
importlib.reload(tree)  # pick up edits to tree_functions without restarting the kernel
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as st
import time
# +
number_of_iterations = 500 # this number is related to the number of operations -
# if slow, then reduce. Multiples of 100 tend to work best.
z_range = 8
r = 0.5
r_s = 0.9
mean_gen = 0
sd_gen = 1
fig_size = (12, 8)
fig_size1 = (8, 5.333)
# -
# ## Parent Generation Population Distribution
# The proportion of the population with each phenotypic value is plotted for the parent generation population.
#
# Technical Note:
# - We use the classic mean of zero and standard deviation of 1, although the code handles any mean and sd.
n_iterations_dem = 100 # Make n_iterations small for demonstration purposes
parent_distribution_dem = tree.normal_distribution(n_iterations_dem, z_range, mean_gen, sd_gen)
parent_distribution = tree.normal_distribution(number_of_iterations, z_range, mean_gen, sd_gen)
plt.figure(figsize=fig_size1)
plt.title('Parent generation distribution')
tree.plot_distribution(parent_distribution_dem, label='parent population')
plt.savefig('images/hi.png')
# ## One Offspring Distribution
# #### The Role of the Mean Regression Coefficient (r)
# If r = 1, then parents at x<sub>p</sub> have offspring centered at x<sub>p</sub> as well. There is no regression toward the mean.
#
# \begin{equation*}
# \large \bar{x}_{o} = \mu_{pd} + 1(x_p - \mu_{pd}) = x_p
# \end{equation*}
parent_index = tree.z_score_to_index(z_score=1, number_of_steps=number_of_iterations, \
z_score_range=z_range)
single_offspring_distribution1 = \
tree.one_offspring_distribution(parent_distribution, parent_index, 1, r_s)
plt.figure(figsize=fig_size1)
plt.xlim(-4.5, 4.5)
plt.title('No regression to the mean. Perfect inheritance. r = 1')
tree.plot_distribution(parent_distribution_dem, label='parent population')
plt.axvline(x=1, linestyle='--', ymax=tree.f_norm(1,0,1)/0.41, label='parents at x_p')
tree.plot_distribution(single_offspring_distribution1, label='offspring of parents at x_p')
plt.legend()
plt.show()
# At the opposite extreme, if r = 0 then parents at x<sub>p</sub> have offspring centered at the mean of the entire parent generation population (μ<sub>pd</sub>). There is complete regression toward the mean.
#
# \begin{equation*}
# \large \bar{x}_{o} = \mu_{pd} + 0(x_p - \mu_{pd}) = \mu_{pd}
# \end{equation*}
single_offspring_distribution3 = tree.one_offspring_distribution(parent_distribution, \
parent_index, 0, r_s)
plt.figure(figsize=fig_size1)
plt.title('Complete regression to the mean. No inheritance. r = 0')
tree.plot_distribution(parent_distribution_dem, label='parent population')
plt.axvline(x=1, linestyle='--', ymax=tree.f_norm(1,0,1)/0.41, label='parents at x_p')
tree.plot_distribution(single_offspring_distribution3, label='offspring of parents at x_p')
plt.axvline(x=0, linestyle='--', label='x = 0', color='orange')
plt.xlim(-4.5, 4.5)
plt.legend()
plt.show()
# In reality, r ≈ 0.5, which means that parents at x<sub>p</sub> have offspring centered at the average of μ<sub>pd</sub> and x<sub>p</sub>, halfway between the mean of the entire parent generation population and the value of the parents. There is some regression toward the mean.
#
# \begin{equation*}
# \large \bar{x}_{o} = \mu_{pd} + 0.5(x_p - \mu_{pd}) = \frac{\mu_{pd} + x_p}{2}
# \end{equation*}
#
single_offspring_distribution2 = tree.one_offspring_distribution(parent_distribution, \
parent_index, 0.5, r_s)
plt.figure(figsize=fig_size1)
plt.title('True (some) regression to the mean. Inheritance. r = 0.5.')
tree.plot_distribution(parent_distribution_dem, label='parent population')
plt.axvline(x=1, linestyle='--', ymax=tree.f_norm(1,0,1)/0.41, label='parents at x_p')
tree.plot_distribution(single_offspring_distribution2, label='offspring of x_p')
plt.axvline(x=0.5, linestyle='--', label='x = 0.5', color='orange')
plt.xlim(-4.5, 4.5)
plt.legend()
plt.show()
# #### The Role of the Standard Deviation Regression Coefficient (r<sub>s</sub>)
# The one-offspring distributions shown so far have used r<sub>s</sub> = 0.9. If however r<sub>s</sub> = 0.5, then the standard deviation of the offspring of parents at x<sub>p</sub> is one half the standard deviation of the entire parent generation population.
#
# \begin{equation*}
# \large \sigma_o=r_s\,\sigma_{pd} = 0.5\,\sigma_{pd}
# \end{equation*}
single_offspring_distribution4 = tree.one_offspring_distribution(parent_distribution, \
parent_index, 0.5, 0.5)
plt.figure(figsize=fig_size1)
tree.plot_distribution(parent_distribution_dem, label='parent population')
plt.axvline(x=1, linestyle='--', ymax=tree.f_norm(1,0,1)/0.41, label='parents at x_p')
tree.plot_distribution(single_offspring_distribution4, label='offspring of parents at x_p')
plt.axvline(x=0.5, linestyle='--', label='x = 0.5', color='orange')
plt.xlim(-4.5, 4.5)
plt.legend()
plt.show()
# As can be observed, the offspring of parents at x<sub>p</sub> are far less spread out than when r<sub>s</sub> was 0.9 in the plots before.
# ## Total Offspring Distribution
offspring_distributions_ = tree.offspring_distributions(parent_distribution_dem, r, r_s)
# Instead of showing only the one offspring distribution from parents at x<sub>p</sub> = 1, we can show all the one offspring distributions from each x<sub>p</sub> in the parent distribution.
plt.figure(figsize=fig_size1)
plt.xlim(-4.5, 4.5)
tree.plot_distribution(parent_distribution, label='parent population')
tree.plot_distributions(offspring_distributions_)
plt.legend()
plt.show()
total_offspring_distribution = \
tree.final_superimposed_distribution_all_area_adj(parent_distribution, r, r_s)
# The individual one offspring distributions shown above are combined by the following equation to form the total offspring distribution.
#
# \begin{equation*}
# \large G(x)=\int_{-\infty}^{\infty} g(x \mid x_p) \, dx_p
# \end{equation*}
plt.figure(figsize=fig_size1)
plt.xlim(-4.5, 4.5)
tree.plot_distribution(parent_distribution, label='parent population')
tree.plot_distribution(total_offspring_distribution, label='offspring-generation population')
plt.legend()
plt.show()
print(tree.st_dev_of_distribution(parent_distribution))
print(tree.st_dev_of_distribution(total_offspring_distribution))
1/1.029  # inverse of the offspring-generation standard deviation printed above
# Technical Note:
# - The total offspring distribution shown above is normed to the area of the parent distribution, which basically means that the offspring-generation population size is set to be equal to the parent population size.
# ### If There Were No Regression Toward The Mean (r = 1)
# Make total_offspring_distribution1 into a parent generation
total_offspring_distribution1 = tree.final_superimposed_distribution_all_area_adj(parent_distribution, 1, r_s)
parent_distribution1 = tree.final_super_to_parent(total_offspring_distribution1)
total_offspring_distribution2 = tree.final_superimposed_distribution_all_area_adj(parent_distribution1, 1, r_s)
parent_distribution2 = tree.final_super_to_parent(total_offspring_distribution2)
total_offspring_distribution3 = tree.final_superimposed_distribution_all_area_adj(parent_distribution2, 1, r_s)
# With no regression toward the mean (r = 1), successive generations become increasingly spread out. This is why there must be regression (r ≈ 0.5) to maintain a stable population distribution.
#
# Blitzstein discusses this in [a talk](https://youtu.be/dzFf3r1yph8?t=728) in which he says: 'only after a few generations, one would see giant 10 foot people and little four inch people'. The model presented here demonstrates these same effects.
plt.figure(figsize=fig_size1)
plt.xlim(-4.5, 4.5)
tree.plot_distribution(parent_distribution) # plot the parent distribution
tree.plot_distribution(total_offspring_distribution1) # plot the total offspring distribution 1
tree.plot_distribution(total_offspring_distribution2) # plot the total offspring distribution 2
tree.plot_distribution(total_offspring_distribution3) # plot the total offspring distribution 3
plt.legend(labels=['1st (parent) generation', '2nd (offspring) generation', \
'3rd generation', '4th generation'])
plt.show()
# The standard deviation of each generation increases in a linear fashion from 1 to 2.3 over the course of four generations.
generations_list = [parent_distribution, total_offspring_distribution1, total_offspring_distribution2, total_offspring_distribution3]
plt.figure(figsize=fig_size1)
tree.plot_generations_sd(generations_list)
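# The widening can be sanity-checked with a simple recurrence. If each generation's one offspring spread is r<sub>s</sub> times that generation's own standard deviation, then σ<sub>n+1</sub> = √(r² + r<sub>s</sub>²) σ<sub>n</sub>. With r = 1 and r<sub>s</sub> = 0.9 the per-generation factor is √1.81 ≈ 1.35, roughly matching the growth plotted above; with r = 0.5 the factor is √1.06 ≈ 1.03, which is why those distributions stay nearly stable. This is a back-of-the-envelope sketch under that assumption, not the notebook's simulation.

```python
import math

def sd_after(n, r, r_s, sd0=1.0):
    """Standard deviation after n generations: sigma_{n+1} = sqrt(r**2 + r_s**2) * sigma_n."""
    return sd0 * math.sqrt(r**2 + r_s**2) ** n

print([round(sd_after(n, r=1.0, r_s=0.9), 2) for n in range(4)])  # ~[1.0, 1.35, 1.81, 2.44]
print([round(sd_after(n, r=0.5, r_s=0.9), 2) for n in range(4)])  # ~[1.0, 1.03, 1.06, 1.09]
```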
# ##### Note to self:
# Talk about if r_s = 0.5. Make that a section. Then, make a new section about how in nature the offspring distribution tends to have the same standard deviation as the parent population. Make plots of the error between the standard deviation when r is fixed at 0.5 and r_s ranges from 0.5 to 1.5 in increments of 0.1, use the graphing code as a template. Then do the same such that r_s is fixed at 0.9 and r ranges from 0.5 to 1.5. Then, make a 3d plot of error as r *and* r_s range from 0.5 to 1.5. Perhaps there's a local minimum somewhere or not? Perhaps eventually make the graph I was talking about with Aidan where you do r vs r_s such that the error is less than some threshold - such as the current error used in the notebook.
# ## Answering the Motivating Question
top_fifth_value_parent = tree.percentile_to_value(0.8, parent_distribution) # Get the value at the 80th percentile
top_fifth_value_offspring = tree.percentile_to_value(0.8, parent_distribution, total_offspring_distribution)
offspring_top_fifth = tree.final_superimposed_distribution(parent_distribution, r, r_s, above_k_v_o=top_fifth_value_offspring)
offspring_top_fifth_par_top_fifth = tree.final_superimposed_distribution(parent_distribution, r, r_s, above_k_v_p=top_fifth_value_parent, above_k_v_o=top_fifth_value_offspring)
offspring_top_fifth_par_bottom_four_fifth = tree.final_superimposed_distribution(parent_distribution, r, r_s, below_k_v_p=top_fifth_value_parent, above_k_v_o=top_fifth_value_offspring)
# In order to answer the motivating question, we need to find whether a majority of the offspring-generation population above the 80th percentile can be attributed to members of the parent-generation population above the 80th percentile. In the plot below, this means finding out whether the area under the green line is greater than the area under the red line. The answer to the motivating question is given by the following equation.
#
# \begin{equation*} \large
# P_a(k_1,\infty,k_2,\infty) = \frac{\Phi(k_1,\infty,k_2,\infty)}{\Phi(-\infty,\infty,k_2,\infty)}
# \end{equation*}
#
# The parameters k<sub>1</sub> and k<sub>2</sub> give the values for the 80th percentile of the parent and total offspring distributions, respectively.
plt.figure(figsize=fig_size1)
plt.xlim(-4.5, 4.5)
tree.plot_distribution(parent_distribution)
tree.plot_distribution(offspring_top_fifth)
tree.plot_distribution(offspring_top_fifth_par_top_fifth)
tree.plot_distribution(offspring_top_fifth_par_bottom_four_fifth)
plt.legend(labels=['parent distribution', 'offspring above 80%', 'offspring from parents above 80%', 'offspring from parents below 80%'], loc='upper right')
plt.show()
# This cell gives the P<sub>a</sub> (the probability that a tree in the top fifth of the offspring-generation population is from a parent in the top fifth of the parent-generation population).
tree.proportion_attributable_percentile(parent_distribution, r, r_s, above_k_p=0.8, above_k_o=0.8, offspring_distribution=total_offspring_distribution)
# As we can see, only a minority (43%) of the tallest one fifth of trees are the offspring of the last generation's tallest one fifth. We've answered the tree problem! (Most of the tallest one fifth in a population are the offspring of the last generation's 'average' trees, the shorter four fifths.)
# ### Equilibrium Point for the Motivating Question
tree_problem_range = tree.step_tree_question_z_score(parent_distribution, r, r_s, z_score_increment=0.125, z_score_bound=8)
# We've been using percentiles only because they are a more accessible way of explaining the problem than standard deviations. However, it probably makes more mathematical sense to use standard deviation.
#
# This plot shows the answer to the tree question for all z-scores from -4 to 4. That is, for each z-score, it gives the percent of the population above the z-score that are offspring of parents below the z-score. A horizontal line is drawn when the proportion is 50%, and a vertical line is drawn to estimate what the z-score is at that equilibrium.
plt.figure(figsize=fig_size)
plt.xlim(-4.5, 4.5)
plt.scatter(np.arange(-4, 4.125, 0.125), tree_problem_range)
plt.axhline(y=0.5, linestyle='--')
plt.axvline(x=0.57, linestyle='--')
plt.show()
# There seems to be an equilibrium point at a z-score of about 0.57. That is, those above 0.57 seem to be equally the offspring of parents below 0.57 and parents above 0.57. Note that the z-score 0.57 is equivalent to the 72nd percentile.
st.norm.cdf(0.57)
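# The equilibrium can also be found with a root-finder instead of being read off the plot. The sketch below uses a bivariate-normal shortcut (μ<sub>pd</sub> = 0, σ<sub>pd</sub> = 1) rather than the project's tree functions, and solves for the z-score at which exactly half of the offspring above that value have parents above it.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm
from scipy.optimize import brentq

r, r_s = 0.5, 0.9
sd_o = np.sqrt(r**2 + r_s**2)  # sd of the total offspring distribution
mvn = multivariate_normal([0.0, 0.0], [[1.0, r], [r, r**2 + r_s**2]])

def frac_from_above(z):
    """P(parent above z | offspring above z), by inclusion-exclusion on the joint CDF."""
    joint_above = 1 - norm.cdf(z) - norm.cdf(z / sd_o) + mvn.cdf([z, z])
    return joint_above / (1 - norm.cdf(z / sd_o))

z_eq = brentq(lambda z: frac_from_above(z) - 0.5, 0.0, 3.0)
print(round(z_eq, 2))  # close to the 0.57 read off the plot
```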
eq = 0.57
offspring_top_equil = tree.final_superimposed_distribution(parent_distribution, r, r_s, above_k_v_o=eq)
offspring_top_par_top_equil = tree.final_superimposed_distribution(parent_distribution, r, r_s, above_k_v_p=eq, above_k_v_o=eq)
offspring_top_par_bottom_equil = tree.final_superimposed_distribution(parent_distribution, r, r_s, below_k_v_p=eq, above_k_v_o=eq)
# This plot is similar to the one shown before, but at the equilibrium 72nd percentile rather than at the 80th percentile. From looking at the plot, it's somewhat believable that the area under the red line (parents below the equilibrium z-score) is equal to the area under the green line (parents above the equilibrium z-score).
plt.figure(figsize=fig_size)
plt.xlim(-4.5, 4.5)
tree.plot_distribution(parent_distribution) # plot the parent distribution
tree.plot_distribution(offspring_top_equil) # plot offspring above the equilibrium
tree.plot_distribution(offspring_top_par_top_equil) # plot offspring from parents above the equilibrium
tree.plot_distribution(offspring_top_par_bottom_equil) # plot offspring from parents below the equilibrium
plt.legend(labels=['parent distribution', 'offspring above equilibrium', 'offspring from parents above equilibrium', 'offspring from parents below equilibrium'])
plt.show()
eq = 0.57
offspring_top_equil = tree.final_superimposed_distribution(parent_distribution, r, r_s, above_k_v_o=eq)
offspring_par_top_equil = tree.final_superimposed_distribution(parent_distribution, r, r_s, above_k_v_p=eq)
offspring_par_bottom_equil = tree.final_superimposed_distribution(parent_distribution, r, r_s, below_k_v_p=eq)
# This is the same plot as before except now we can see the full distributions for all the offspring, the offspring from parents above the equilibrium z-score, and the offspring from parents below the equilibrium z-score. The equilibrium z-score is denoted with a blue dashed line.
plt.figure(figsize=fig_size)
plt.xlim(-4.5, 4.5)
tree.plot_distribution(parent_distribution) # plot the parent distribution
tree.plot_distribution(total_offspring_distribution) # plot the total offspring distribution
tree.plot_distribution(offspring_par_top_equil) # plot offspring from parents above equilibrium
tree.plot_distribution(offspring_par_bottom_equil) # plot the offspring from parents below equillibrium
plt.axvline(x=0.57, linestyle='--')
plt.legend(labels=['parent distribution', 'offspring distribution', 'offspring from parents above equilibrium', 'offspring from parents below equilibrium'])
plt.show()
# It's clear that while the offspring distribution from parents above the equilibrium is farther to the right, it is much smaller than the offspring distribution from parents below the equilibrium. Those two forces (size and probability) exactly balance each other out above the equilibrium point. One interesting thing to note is that there are two gaps between the green and red lines to the right of the equilibrium point: one in which the green line is below the red line and another in which the green line is above it. The areas of those two gaps must be equal to each other.
# ## Intergenerational Mobility
# We've been talking only about the limited case in which we're comparing the amount of offspring from parents below a certain percentile/z-score to the total amount of offspring above that percentile/z-score. However, we can be much more general than that.
#
# For instance, we can answer questions such as: consider the offspring in some quintile of the offspring distribution, what percent are the offspring of the last generation's top quintile? What percent are the offspring of parents in the last generation's fourth quintile? Etc. I call this getting the step_proportion_attributable, because for each quintile in the offspring distribution we're getting the proportion that can be attributed to each quintile of the parent distribution.
#
#
# We can also answer a different but related question: consider the parents in some quintile of the parent distribution - for example the top quintile - what percent of their offspring end up in the top quintile (just like them)? What percent move down a bit to end up in the fourth quintile? What percent end up all the way over in the bottom quintile? I call this getting the step_proportion_destined, because for each quintile in the parent distribution we're getting the proportion of their offspring that are destined to end up in each of the five quintiles. The step_proportion_destined is a measure of intergenerational mobility.
#
# It turns out that when using percentiles as steps, such as asking this question for each quintile rather than for each z-score-range of 1 from -2 to 2, some simple math can show that the step_proportion_attributable and step_proportion_destined are exactly the same.
#
# If there were no regression toward the mean, we would expect that nearly all the offspring of parents in the top quintile would also end up in the top quintile. On the other hand, if there were complete regression toward the mean, with no correlation between parent and offspring, we would expect the offspring of parents in the top quintile to be evenly split among all quintiles. The truth is in the middle of these two extremes.
# Make the number of iterations large for accuracy
n_iterations_large = 100  # increase (e.g., back to 2000) for accuracy; reduce if too slow
parent_distribution_large = tree.normal_distribution(n_iterations_large, z_range, mean_gen, sd_gen)
# Here we calculate the step_proportion_destined for the five quintiles, which is equivalent to the step_proportion_attributable.
percent_step_five = 0.2
step_percentile = tree.step_proportion_destined_percentile(parent_distribution_large, \
r, r_s, percent_step_five)
# This plot shows, for each quintile of the parent distribution, the probability that their offspring end up in each of the five quintiles. The parent quintile is labeled at the bottom of the graph. The offspring quintiles are displayed with colors: the lightest blue corresponds to the probability that the offspring of those parents end up in the top quintile while the darkest blue corresponds to the probability that the offspring of those parents end up in the bottom quintile. For example: 5% of offspring of parents in the top quintile end up in the bottom quintile, while 44% stay in the top quintile - like their parents.
plt.figure(figsize=fig_size)
tree.bar_graph_step(step_percentile)
# As mentioned before, the same plot can correctly be interpreted differently: the bottom quintiles represent quintiles of the offspring distribution and the colors represent the quintiles of the parents of those offspring. This would mean that 5% of those in the top quintile are the offspring of parents in the bottom quintile, while 44% of those in the top quintile are the offspring of parents who were also in the top quintile.
#
# It's interesting to note that offspring of parents in the middle quintile have a roughly uniform chance of ending up in any of the five quintiles, while offspring of parents in the bottom or top quintile are about 9 times more likely to end up in the same quintile as their parents than to move to the opposite extreme quintile.
#
# We have the opportunity here to compare the intergenerational mobility of our simulated polygenic distribution to the measured, real-life intergenerational mobility of various traits. While I wasn't able to find the intergenerational mobility of height, the Brookings Institution provides measurements of the intergenerational mobility of education and income.
#
# <img src="tree_source_images/27-Education-Mobility.png">
#
# <img src="tree_source_images/RelativeMob_Figure1.png">
# It's interesting to note the similarities between the values (and yes I took inspiration from their colors and formatting). Listing the values for each of the offspring and parent quintiles side by side yields r<sup>2</sup> = 0.92 between our simulated polygenic and the Brookings education measurements, and r<sup>2</sup> = 0.93 between our simulated polygenic and the Brookings income measurements. Interestingly, the simulated polygenic values are more correlated with both the education and income measurements than they are with each other: The Brookings education and income measurements compared to each other have r<sup>2</sup> = 0.84.
# Note that we can create this plot using any percentile step of our choice, for example steps of one third. You can do this by running the cell below.
# +
# percent_step_three = 0.3333
# step_three_labels = ['Bottom Third', 'Middle Third', 'Top Third']
# step_percentile_3 = tree.step_proportion_destined_percentile(parent_distribution_large, r, r_s, percent_step_three)
# plt.figure(figsize=fig_size)
# tree.bar_graph_step(step_percentile_3, step_labels=step_three_labels)
# -
| archive/tree_project_explainer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# The basics
import numpy as np
import pandas as pd
# Sklearn
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, confusion_matrix
# Keras
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.preprocessing.text import one_hot, Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Flatten, Dense, Softmax, LSTM
# utils
import os
# -
isear = pd.read_csv('../data/raw/isear.csv', sep='|', on_bad_lines='skip', usecols=['Field1', 'SIT', 'EMOT'])  # on_bad_lines replaces the removed error_bad_lines argument
number_of_classes = len(isear.EMOT.unique())
maxlen = 1000
max_words = 10000
tokenizer = Tokenizer(num_words=max_words)
tokenizer.fit_on_texts(isear['SIT'])
sequences = tokenizer.texts_to_sequences(isear['SIT'])
word_index = tokenizer.word_index
print('Found %s unique tokens.' % len(word_index))
data = pad_sequences(sequences, maxlen=maxlen, padding='post')
x_train, x_test, y_train, y_test = train_test_split(data, isear['EMOT'])
# +
glove_dir = '../data/external'
embeddings_index = {}
f = open(os.path.join(glove_dir, 'glove.6B.50d.txt'))
for line in f:
values = line.split()
word = values[0]
coefs = np.asarray(values[1:], dtype='float32')
embeddings_index[word] = coefs
f.close()
print('Found %s word vectors.' % len(embeddings_index))
# -
embeddings_index['king'] - embeddings_index['man'] + embeddings_index['woman'] - embeddings_index['queen']
# +
embedding_dim = 50 # if changing this, update the GloVe file name above
embedding_matrix = np.zeros((max_words, embedding_dim))
for word, i in word_index.items():
if i < max_words:
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
embedding_matrix[i] = embedding_vector
# -
# ## Model creation time
model = Sequential()
model.add(Embedding(max_words, embedding_dim, input_length=maxlen))
model.add(LSTM(128, return_sequences=True, dropout=0.2, recurrent_dropout=0.2))
model.add(Flatten())
model.add(Dense(number_of_classes + 1, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
model.fit(x_train, to_categorical(y_train),
epochs=5,
batch_size=32,
validation_data=(x_test, to_categorical(y_test)))
y_pred = np.argmax(model.predict(x_test), axis=1)  # predict_classes was removed in recent Keras
y_pred
confusion_matrix(y_test, y_pred)
| notebooks/Isear-07-codefupanda-LSTM_Model-GloVE.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Filter
# creating a function
def verifica_par(num):
if num % 2 == 0:
return True
else:
return False
# calling the function, which returns the boolean True for an even number
# and False for an odd number
print(verifica_par(35))
lista = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18]
# keeps only the values whose remainder when divided by 2 is 0
print(list(filter(verifica_par, lista)))
# using a lambda expression
print(list(filter(lambda x: x % 2 == 0, lista)))
# returns the numbers greater than 8
print(list(filter(lambda num: num > 8, lista)))
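# `filter` returns a lazy iterator; a list comprehension is an equivalent and often more idiomatic alternative. A quick equivalence check (not part of the original exercise):

```python
lista = list(range(19))  # the same values as the list above

evens_with_filter = list(filter(lambda x: x % 2 == 0, lista))
evens_with_comprehension = [x for x in lista if x % 2 == 0]

assert evens_with_filter == evens_with_comprehension
print(evens_with_comprehension)  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```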
| Cap04/filter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
#
# # Mie Resonances
#
# +
# sphinx_gallery_thumbnail_path = '../images/Experiment_QscaVSWavelength.png'
def run(Plot, Save):
import numpy as np
from PyMieSim.Experiment import SphereSet, SourceSet, Setup
scatSet = SphereSet( Diameter = [200e-9, 150e-9],
Index = [3, 4],
nMedium = [1])
sourceSet = SourceSet( Wavelength = np.linspace(400e-9, 1000e-9, 500),
Polarization = [0],
Amplitude = [1] )
Experiment = Setup(ScattererSet = scatSet, SourceSet = sourceSet)
Data = Experiment.Get(Input=['Qsca', 'g'])
if Plot:
Data.Mean('Index').Plot(y='Qsca', x='Wavelength')
if Save:
from pathlib import Path
dir = f'docs/images/{Path(__file__).stem}'
Data.SaveFig(Directory=dir, y='Qsca', x='Wavelength')
if __name__ == '__main__':
run(Plot=True, Save=False)
| docs/source/Experiments/QscaVSWavelength.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from packages.nxt import *
brick = connectar(5)
Musica(brick)
Do()
Re()
Mi()
Fa()
Sol()
La()
Si()
Do(octava=1)
# 
# +
La(1/2), Si_bemol(1/2)
Do(octava=1), La()
Fa(), Si_bemol(1/2), La(1/2)
Sol(1/2), Fa(1/2), Sol(1/2), Sol(1/2)
La(1/2), Fa(1/2), La(1/2), Si_bemol(1/2)
Do(octava=1), La()
Fa(), Si_bemol(1/2), La(1/2)
Sol(1/2), Fa(1/2), Sol(1/2), La(1/2)
Fa()
# -
# 
# +
Sol(1/2), Do(1/2+1/4,octava=1), Do(1/4,octava=1), Do(1/2,octava=1), Do(1/2,octava=1)
Do(octava=1), Do(octava=1)
Sol(1/2), Sol(1/2), Sol(1/2), Sol(1/2)
Mi(octava=1), Do(1/2,octava=1), Sol(1/2)
Do(1/2+1/4,octava=1), Do(1/4,octava=1), Do(1/2,octava=1), Do(1/2,octava=1)
Do(octava=1), Do(octava=1)
Sol(1/2), Sol(1/2), La(1/2), Si(1/2)
Do(1/2,octava=1), Silenci(1/2), Sol(1/2), Sol(1/2)
Mi(octava=1), Re(octava=1)
Do(1/2,octava=1), Silenci(1/2), Do(1/2,octava=1), Do(1/2,octava=1)
Sol(1/2), Sol(1/2), La(1/2), Si(1/2)
Do(1,octava=1)
# -
desconnectar(brick)
| Music Test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import scipy.optimize as opt
import pandas as pd
import os
try:
import cPickle as pickle
except ImportError:
import pickle
from . import wealth
from . import labor
from . import SS
from . import utils
def chi_n_func(s, a0, a1, a2, a3, a4):
chi_n = a0 + a1*s + a2*s**2 + a3*s**3 + a4*s**4
return chi_n
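As a quick sanity check of the quartic form above (the function is re-declared so the snippet is self-contained; the coefficient values are illustrative only, not calibration output):

```python
def chi_n_func(s, a0, a1, a2, a3, a4):
    # quartic polynomial in age s, mirroring the definition above
    return a0 + a1 * s + a2 * s**2 + a3 * s**3 + a4 * s**4

# with a1..a4 zero the disutility profile is flat at a0
assert chi_n_func(40, 1, 0, 0, 0, 0) == 1
# each term contributes independently: 1 + 2 + 4 = 7 at s = 2
assert chi_n_func(2, 1, 1, 1, 0, 0) == 7
```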
# +
a0 = 1
a1 = 0
a2 = 0
a3 = 0
a4 = 0
params_init = np.array([a0, a1, a2, a3, a4])
labor_data = np.array([167, 165, 165, 165, 165, 166, 165, 165, 164, 166, 164])
labor_moments = labor_data * 12 / (365 * 17.5)
data_moments = np.array(list(labor_moments.flatten()))
# NOTE: `ages`, the parameters object `p`, `client`, and `calc_moments`
# are assumed to be defined elsewhere in the calibration pipeline.
p.chi_n = chi_n_func(ages, a0, a1, a2, a3, a4)
ss_output = SS.run_SS(p, client)
model_moments = calc_moments(ss_output, p.omega_SS, p.lambdas, p.S, p.J)
# -
| ogusa/.ipynb_checkpoints/Chi_n-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <div class="clearfix" style="padding: 10px; padding-left: 0px">
# <img src="../resources/img/softbutterfly-logo.png" class="pull-left" style="display: block; height: 40px; margin: 0;"><img src="../resources/img/jupyter-logo.png" class="pull-right" style="display: block; height: 20px; margin-top: 10px;">
# </div>
# <h1>
#   Basic Python Course<br/>
#   <small>With applications in science and engineering</small><br/>
#   <small><small>Class of September 23, 2017</small></small><br/>
# </h1>
# <div style="text-align:center">
#     <strong><NAME></strong><br/>
#     <code><a href="https://github.com/zodiacfireworks" target="_blank">@zodiacfireworks</a></code><br/>
#     <br/>
#     <em>Universidad Nacional Mayor de San Marcos</em><br/>
#     <span>Faculty of Physical Sciences</span><br>
#     <br>
#     <i>SoftButterfly</i><br>
#     <span>Co-founder and Head of the Development Area</span>
# </div>
# ## 1 Basic instructions
# ### 1.1 Hello, world
print('Hola mundo!')
# #### Version with comments and the `print` function
# +
# Comments:
# 1. A comment is marked with the `#` symbol
# 2. The `print` function is used to print and
#    display the output of programs
#
# The syntax is:
#     print('text string')
# Example: <NAME>
# >>> print('Hola mundo!')
#
print('Hola mundo!')
# -
# ### 1.2 Make the world repeat what I say!
greetings = input('- ¡Hola humano!\n')
print('-', greetings, 'x2 :v')
# Comments:
# 1. The `input` function is used to receive
#    the data provided by the user.
#
# The syntax is:
#     input('prompt')
#
# 2. The value returned by the `input` function
#    is the information entered by the user.
#
#     bar = input('prompt')
#
# 3. The `\n` sequence represents a line
#    break; sequences of this kind are
#    called ESCAPE SEQUENCES
#
greetings = input('- ¡Hola humano!\n')
print('-', greetings, 'x2 :v')
# ### 1.3 A first _useful_ program: the area of a circle
radius = input('Ingrese el radio [m]: ')
radius = float(radius)
area = 3.141592*radius*radius
print('El area es:', area, 'm^2')
# #### Comments?
# +
# 1. We ask for the radius in meters
radius = input('Ingrese el radio [m]: ')
# 2. The radius comes in as text; the `float`
#    function converts it to a decimal number.
radius = float(radius)
# 3. We apply the usual pi*r^2 formula
area = 3.141592*radius*radius
# 4. We print the result :D
print('El area es:', area, 'm^2')
# -
# #### Additional note
# The `float` function converts text to real numbers.
# Its syntax is
#
# ```
# float('number as text')
# ```
# ## 2. Variables and data types
# ### 2.1. Variables
# As we have been doing in the previous examples, variables in Python are declared with the assignment operator (`=`). Its syntax is:
# ```
# variable_name = variable_value
# ```
# #### Examples
# +
# 1. The print function accepts multiple arguments separated
#    by commas and prints them one after another.
# 2. The `type` function tells us what type of data
#    is stored in our variable.
x = '<NAME>'
print(type(x), ':', x)
# +
x = 5 + 3j
print(type(x), ':', x)
# +
x = type
print(type(x), ':', x)
# +
x = print
print(type(x), ':', x)
# +
x = input
print(type(x), ':', x)
# -
# ### 2.2. Data types
# #### 2.2.1. Booleans
# Boolean types (`bool`) store the truth value of propositions. They are represented by the words `True` and `False`. Using the `bool` function, any expression that admits a truth value can be converted to a boolean.
# #### Examples
# +
x = True
y = False
print(type(x), ',', type(y))
# +
# `is` is a comparison (identity) operator (more on it later in the course)
# `not` is the logical negation operator
# `or` is the logical disjunction operator
This = 0
love = This
This is love, love is not True or False, love is love
# +
x = 5
x_bool = bool(0)
print(type(x), ':', x_bool)
# +
x = ''
x_bool = bool('')
print(type(x), ':', x_bool)
# -
# #### 2.2.2 Integers
# A variable of type `integer` can only hold whole numbers. Expressions that can be represented as integers can be converted to the integer type with the `int` function.
# #### Examples
# +
x = -5
print(type(x), ':', x)
# +
x = int(5.12)
print(type(x), ':', x)
# +
x = int('25')
print(type(x), ':', x)
# -
# #### 2.2.3 Decimal numbers
# Decimal numbers are stored in variables of type `float`. Expressions that can be represented as decimal numbers can be converted to the `float` type with the `float` function.
# #### Examples
# +
x = 3.141516
print(type(x), ':', x)
# +
x = float(5)
print(type(x), ':', x)
# +
x = float('2.718281828')
print(type(x), ':', x)
# +
# 1. A decimal number always carries a `.`, so
#    5 is different from 5.0
# 2. To get the name of a type, use the __name__ property
x = 5
y = float(5)
print(x, 'es', type(x).__name__, 'mientras que', y, 'es', type(y).__name__)
# -
# #### 2.2.4. Lists
# +
x = [1, 2, 3, 5, 'ups...']
print(type(x))
# +
# 1. Some ingredients for noodles with papa a la huancaina, for 4 people
# 2. This is the first appearance of the `for` loop. In Python,
#    iteration with the for loop goes over elements. Here we do not
#    use indices to fetch each element; we use it directly.
for el in ['1 kg de fideos', '4 tomates', '1 zanahoria grande', 'Hongos y laurel', '1/2 kg de papas', '2 Ajies amarillos']:
print(el)
# -
# #### 2.2.5. Tuples
# +
x = (1, 2, 3, 5, 'ups...')
print(type(x))
# -
# The real constructor of a tuple is the comma (`,`)
# +
y = 'only one',
print(type(y))
# -
# #### 2.2.6. Sets
# +
x = {1, 2, 3, 4, 'okay'}
print(type(x))
# -
# The elements of a set are never repeated
# +
x = {1, 2, 3, 4, 'okay'}
y = {1, 1, 2, 2, 3, 4, 'double one'}
print(x)
print(y)
# -
# #### 2.2.7. Dictionaries
# +
person = {
'Name': 'Johana',
'Lastname': 'Gonzales',
'Facultad': 'Sociales'
}
print(type(person))
# +
city = {
'Name': 'Lima',
'Foundation': {
'year': 1535,
'mes': 1,
'date': 18,
},
'Lat': 'S12°2\'35.45"',
'Lon': 'O77°1\'41.66"'
}
print(type(city))
# +
import json
response_text = """
{
"id": 1,
"name": "bulbasaur",
"base_experience": 64,
"height": 7,
"is_default": true,
"order": 1,
"weight": 69,
"abilities": [
{
"is_hidden": true,
"slot": 3,
"ability": {
"name": "chlorophyll",
"url": "http://pokeapi.co/api/v2/ability/34/"
}
}
],
"forms": [
{
"name": "bulbasaur",
"url": "http://pokeapi.co/api/v2/pokemon-form/1/"
}
],
"game_indices": [
{
"game_index": 1,
"version": {
"name": "white-2",
"url": "http://pokeapi.co/api/v2/version/22/"
}
}
],
"held_items": [],
"location_area_encounters": [],
"moves": [
{
"move": {
"name": "captivate",
"url": "http://pokeapi.co/api/v2/move/445/"
},
"version_group_details": [
{
"level_learned_at": 0,
"version_group": {
"name": "heartgold-soulsilver",
"url": "http://pokeapi.co/api/v2/version-group/10/"
},
"move_learn_method": {
"name": "machine",
"url": "http://pokeapi.co/api/v2/move-learn-method/4/"
}
},
{
"level_learned_at": 0,
"version_group": {
"name": "platinum",
"url": "http://pokeapi.co/api/v2/version-group/9/"
},
"move_learn_method": {
"name": "machine",
"url": "http://pokeapi.co/api/v2/move-learn-method/4/"
}
},
{
"level_learned_at": 0,
"version_group": {
"name": "diamond-pearl",
"url": "http://pokeapi.co/api/v2/version-group/8/"
},
"move_learn_method": {
"name": "machine",
"url": "http://pokeapi.co/api/v2/move-learn-method/4/"
}
}
]
}
],
"species": {
"name": "bulbasaur",
"url": "http://pokeapi.co/api/v2/pokemon-species/1/"
},
"stats": [
{
"base_stat": 45,
"effort": 0,
"stat": {
"name": "speed",
"url": "http://pokeapi.co/api/v2/stat/6/"
}
}
],
"types": [
{
"slot": 2,
"type": {
"name": "poison",
"url": "http://pokeapi.co/api/v2/type/4/"
}
}
]
}
"""
bulbasaur = json.loads(response_text)
print(type(bulbasaur), bulbasaur)
# -
abilities = bulbasaur['abilities']
ability_one = bulbasaur['abilities'][0]['ability']
ability_one
# #### 2.2.8. Nothingness: `None`
# +
x = None
print(type(x))
# -
# ### Additional notes
# <blockquote>
# Lists are mutable objects, that is, their length can change, allowing us to insert and remove elements.
# </blockquote>
x_lista = [1, 2, 3, 5, 'ups...']
x_lista.pop(-1)
x_lista.pop(-1)
x_lista.append(4)
print(x_lista)
# <blockquote>
# Tuples, by contrast, are immutable objects: once defined, we
# cannot add or remove elements.
# </blockquote>
x_tupla = (1, 2, 3, 5, 'ups...')
x_tupla.pop(-1)     # raises AttributeError: tuples have no `pop` method
x_tupla.append(4)   # never reached: tuples have no `append` either
print(x_tupla)
# <blockquote>
# Immutable objects in Python have the <code>__hash__</code> property, while mutable ones do not.
# </blockquote>
x_tuple = ('a', 'b', 'c', 'd')
x_tuple.__hash__()
x_list = ['a', 'b', 'c', 'd']
x_list.__hash__()   # raises TypeError: list.__hash__ is None
x_set = {1, 2, 3, 4, 'okay'}
x_set.__hash__()    # raises TypeError: set.__hash__ is None
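The `__hash__` distinction above is exactly what decides whether an object can serve as a dictionary key or a set member; a minimal sketch:

```python
# immutable objects hash to a stable integer and can key a dict
key = ('a', 'b')
d = {key: 'ok'}
assert d[('a', 'b')] == 'ok'

# mutable objects are unhashable: list.__hash__ is None
assert [].__hash__ is None
unhashable = False
try:
    hash(['a', 'b'])
except TypeError:
    unhashable = True
assert unhashable
```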
# ### 1.4. The `type` function
# The `type` function is used to find out the type of data held in a variable.
# Its syntax is:
# ```
# type(value_or_variable)
# ```
# #### Examples
# Text strings can be written with
# single quotes `'` ...
type('Hola')
# ... or with double quotes `"`
type("hola")
type(2)
# +
pi = 3.14159216
type(pi)
# +
# The imaginary unit is written by appending `j`
# to any number
z = 5 + 9.1j
type(z)
| lesson-01/lesson-01.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Install imports
# # !pip install pandas
# # !pip install numpy
# # !pip install tensorflow
# # !pip install scipy
#
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import numpy as np
import scipy
# +
x_data = [
[0,0],
[0,1],
[1,0],
[1,1]
]
y_data = [
[0],
[1],
[1],
[0]
]
# -
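The `y_data` column above is just the XOR truth table over the two inputs; a quick pure-Python check:

```python
x = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [[0], [1], [1], [0]]

# bitwise XOR of the two input bits reproduces each target
xor = [[a ^ b] for a, b in x]
assert xor == y
```

XOR is not linearly separable, which is why the model below needs a hidden layer.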
x_data = np.array(x_data)
y_data = np.array(y_data)
x_data.shape
y_data.shape
model = keras.Sequential()
model.add(layers.Dense(32, activation="sigmoid"))
model.add(layers.Dense(1, activation="sigmoid"))
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)
model.compile(optimizer=optimizer,loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_data, y_data, batch_size=4, epochs=5000)
model.summary()
predict = model.predict(x_data)
print(np.round(predict))
| Xor NN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Hands-On Data Preprocessing in Python
# Learn how to effectively prepare data for successful data analytics
#
# AUTHOR: Dr. <NAME>
#
# # Chapter 7: Classification
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
# ## Classification models
#
#
# ### Example of designing a classification model
#
#
# ### Classification Algorithms
#
#
# ## K-Nearest Neighbors (KNN)
#
#
# ### Example of using KNN for classification
applicant_df = pd.read_csv('CustomerLoan.csv')
applicant_df.drop(columns = ['Name'],inplace=True)
applicant_df
newApplicant = applicant_df.iloc[20]
newApplicant
# +
applicant_df = pd.read_csv('CustomerLoan.csv')
applicant_df.drop(index = [20],inplace=True)
fig, ax = plt.subplots()
subset = applicant_df.loc[applicant_df['default']=='Yes']
ax.scatter(subset.income, subset.score, marker='o', label='Default-YES', color='C1')
subset = applicant_df.loc[applicant_df['default']=='NO']
ax.scatter(subset.income, subset.score, marker='D', label='Default-NO', color='C0')
ax.scatter(newApplicant.income, newApplicant.score, marker='*', label='New Applicant', color='black', s=150)
plt.xlabel('income') # set x-axis label
plt.ylabel('score') # set y-axis label
for _, row in applicant_df.iterrows():
ax.annotate(row.Name, (row.income -700, row.score-10))
handles, labels = ax.get_legend_handles_labels()
ax.legend(handles, labels, bbox_to_anchor=(1.05, 1))
plt.show()
# -
applicant_df = pd.read_csv('CustomerLoan.csv')
applicant_df['income_Normalized'] = (applicant_df.income - applicant_df.income.min())/(applicant_df.income.max() - applicant_df.income.min())
applicant_df['score_Normalized'] = (applicant_df.score - applicant_df.score.min())/(applicant_df.score.max() - applicant_df.score.min())
applicant_df.drop(columns = ['Name'])
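The two normalized columns apply standard min-max rescaling, `(x - min) / (max - min)`; a standalone sketch with made-up incomes (not values from CustomerLoan.csv):

```python
income = [20000, 50000, 80000]   # hypothetical incomes
lo, hi = min(income), max(income)

# same formula as the pandas expression above, element by element
normalized = [(x - lo) / (hi - lo) for x in income]

# minimum maps to 0, maximum to 1, the midpoint to 0.5
assert normalized == [0.0, 0.5, 1.0]
```

Normalization matters for KNN because otherwise the feature with the larger numeric range dominates the distance.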
# +
from sklearn.neighbors import KNeighborsClassifier
predictors = ['income_Normalized','score_Normalized']
target = 'default'
Xs = applicant_df[predictors].drop(index=[20])
y= applicant_df[target].drop(index=[20])
knn = KNeighborsClassifier(n_neighbors=4)
knn.fit(Xs, y)
newApplicant = pd.DataFrame({'income_Normalized':
applicant_df.iloc[20].income_Normalized,
'score_Normalized':
applicant_df.iloc[20].score_Normalized},
index = [20])
predict_y = knn.predict(newApplicant)
print(predict_y)
# -
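KNN itself is small enough to sketch by hand: rank the training points by Euclidean distance to the query and take a majority vote among the `k` closest. The toy points below are illustrative, not the loan data:

```python
from collections import Counter
import math

def knn_predict(train_x, train_y, query, k):
    # distance from the query to every training point
    dists = [math.dist(p, query) for p in train_x]
    # indices of the k nearest points
    nearest = sorted(range(len(dists)), key=dists.__getitem__)[:k]
    # majority vote among their labels
    votes = Counter(train_y[i] for i in nearest)
    return votes.most_common(1)[0][0]

train_x = [(0, 0), (0, 1), (5, 5), (6, 5)]
train_y = ['NO', 'NO', 'Yes', 'Yes']
assert knn_predict(train_x, train_y, (1, 0), k=3) == 'NO'
assert knn_predict(train_x, train_y, (5, 6), k=3) == 'Yes'
```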
# ## Decision Trees
# ### Example of using Decision Trees for classification
# +
from sklearn.tree import DecisionTreeClassifier, plot_tree
predictors = ['income','score']
target = 'default'
Xs = applicant_df[predictors].drop(index=[20])
y= applicant_df[target].drop(index=[20])
classTree = DecisionTreeClassifier()
classTree.fit(Xs, y)
predict_y = classTree.predict(newApplicant)
print(predict_y)
# -
from sklearn.tree import plot_tree
plot_tree(classTree,
feature_names=predictors,
class_names=y.unique(),
filled=True,
impurity=False)
plt.show()
| Chapter07/Chapter 7.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="view-in-github"
# <a href="https://colab.research.google.com/github/Tessellate-Imaging/monk_v1/blob/master/study_roadmaps/1_getting_started_roadmap/3_acceptable_data_types_in_monk/3)%20Multi%20label%20-%20Data%20with%20labels%20in%20CSV%20file.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# -
# # Goals
#
# ### 1. Understand what multi label image classifications is
# - One or more than one label per image
#
# ### 2. Understand multi label - csv type data structure
# ## Table of Contents
#
#
# ## [0. Install](#0)
#
#
# ## [1. Load train data - csv type - Multi Label](#1)
#
#
# ## [2. Understand experiment data-summary](#2)
#
#
# ## [3. Load val data along with train data](#3)
#
#
# ## [4. Understand experiment data-summary](#4)
# # Dataset Details
# - Credits: https://www.kaggle.com/c/planet-understanding-the-amazon-from-space
# ! wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=12ia20erQXTmJTk1zj1pZ_VeCPViyAQvm' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=12ia20erQXTmJTk1zj1pZ_VeCPViyAQvm" -O satellite_cls.zip && rm -rf /tmp/cookies.txt
# ! unzip -qq satellite_cls.zip
# <a id='0'></a>
# # Install Monk
#
# - git clone https://github.com/Tessellate-Imaging/monk_v1.git
#
# - cd monk_v1/installation/Linux && pip install -r requirements_cu9.txt
# - (Select the requirements file as per OS and CUDA version)
# !git clone https://github.com/Tessellate-Imaging/monk_v1.git
# +
# If using Colab install using the commands below
# !cd monk_v1/installation/Misc && pip install -r requirements_colab.txt
# If using Kaggle uncomment the following command
# #!cd monk_v1/installation/Misc && pip install -r requirements_kaggle.txt
# Select the requirements file as per OS and CUDA version when using a local system or cloud
# #!cd monk_v1/installation/Linux && pip install -r requirements_cu9.txt
# -
# # Imports
# Monk
import os
import sys
sys.path.append("monk_v1/monk/");
#Using mxnet-gluon backend
from gluon_prototype import prototype
# ## Creating and managing experiments
# - Provide project name
# - Provide experiment name
gtf = prototype(verbose=1);
gtf.Prototype("Dataset-Type", "exp-csv-multi-label");
# ### This creates files and directories as per the following structure
#
#
# workspace
# |
# |--------Dataset-Type
# |
# |
#                         |-----exp-csv-multi-label
# |
# |-----experiment-state.json
# |
# |-----output
# |
# |------logs (All training logs and graphs saved here)
# |
# |------models (all trained models saved here)
# <a id='1'></a>
# # Load train data - csv type - Multi Label
# ## Quick mode training
#
# - Using Default Function
# - dataset_path
# - model_name
# - num_epochs
#
#
# ## Dataset folder structure
#
# parent_directory
# |
# |
# |------train (folder can be named anything)
# |------img1.jpg
# |------img2.jpg
# |------.... (and so on)
# |
# |------train_labels.csv (file can be named anything)
#
#
# train_labels.csv has 2 columns (column names could be anything)
# - ID | Labels
# img1.jpg | agriculture cloudy
# img2.jpg | forest hazy
# img3.jpg | road forest rains
# img4.jpg | city
# ..... (and so on)
#
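Under the hood, space-delimited label strings like these are typically expanded into multi-hot vectors; a hedged sketch of that parsing step (the rows and vocabulary below are made up to match the table's shape, not Monk's internal API):

```python
rows = {
    'img1.jpg': 'agriculture cloudy',
    'img2.jpg': 'forest hazy',
}

# build the label vocabulary from every label seen, in sorted order
vocab = sorted({lab for labels in rows.values() for lab in labels.split(' ')})
index = {lab: i for i, lab in enumerate(vocab)}

def multi_hot(labels):
    # one slot per vocabulary entry; 1 where the label is present
    vec = [0] * len(vocab)
    for lab in labels.split(' '):
        vec[index[lab]] = 1
    return vec

assert vocab == ['agriculture', 'cloudy', 'forest', 'hazy']
assert multi_hot('forest hazy') == [0, 0, 1, 1]
```

This is also why the `delimiter` argument below matters: it tells the loader how to split each label string.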
# ### In similar fashion you can structure your own dataset
# ### Along with providing dataset_path you now provide path_to_csv
gtf.Default(dataset_path="satellite_cls/train-jpg",
path_to_csv="satellite_cls/train.csv",
model_name="resnet18_v1",
delimiter = " ", # very important to mention how the labels are separated, via spaces, or commas, etc
num_epochs=5);
# <a id='2'></a>
# # Understand experiment data-summary
# Dataset Details
# Train path: monk_v1/monk/system_check_tests/datasets/dataset_csv_id/train
# Val path: None
# CSV train path: monk_v1/monk/system_check_tests/datasets/dataset_csv_id/train.csv
# CSV val path: None
#
#
# ### Train path is mentioned and Val path is None
#
# ### CSV Train path is mentioned unlike in foldered type dataset
#
# Dataset Parameters:
# Input Size: 224
# Batch Size: 4
# Shuffle: True
# Processors: 4
# Num Classes: 2
#
# ### Data input size is 224 (default setting)
# ### Batch size is 4 (default setting)
# Dataset Numbers
# Num train images: 28335
# Num val images: 12144
# Num classes: 17
#
# ### Since val data is missing, train data is split into train and val with a 70:30 ratio
# - 70% of data being used for training
# - 30% of data being used for validation
#
# ### Auto detect number of classes
| study_roadmaps/1_getting_started_roadmap/3_acceptable_data_types_in_monk/3) Multi label - Data with labels in CSV file.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import linear_model
url='data/Advertising_data.csv'
data=pd.read_csv(url)
data.head(2)
data.info()
x=data.TV
y=data.sales
plt.figure(figsize=(16,8))
plt.scatter(x,y,color='red')
plt.show()
rg=linear_model.LinearRegression()
rg.fit(data[["TV"]],data.sales)
rg.predict([[200]])
# coefficient
rg.coef_
# intercept
rg.intercept_
# ## Plotting the regression line
plt.figure(figsize=(16,8))
plt.scatter(x,y,color='red')
plt.plot(data[["TV"]],rg.predict(data[["TV"]]),color='blue')
plt.show()
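`rg.predict([[200]])` is simply `rg.coef_ * 200 + rg.intercept_`. The same single-predictor fit can be sketched with the closed-form least-squares formulas (toy data below, not the advertising set):

```python
# ordinary least squares for one predictor:
#   slope = cov(x, y) / var(x),  intercept = mean(y) - slope * mean(x)
x = [1.0, 2.0, 3.0, 4.0]
y = [3.0, 5.0, 7.0, 9.0]          # exactly y = 2x + 1

mx = sum(x) / len(x)
my = sum(y) / len(y)
slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
intercept = my - slope * mx

assert abs(slope - 2.0) < 1e-9
assert abs(intercept - 1.0) < 1e-9
# prediction at x = 200, mirroring rg.predict([[200]])
assert abs((slope * 200 + intercept) - 401.0) < 1e-9
```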
| linear-regression-master/Simple_linear_regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:badeda]
# language: python
# name: conda-env-badeda-py
# ---
# # Really good training pipeline for pytorch EfficientDet
#
# Hi everyone!
#
# My name is <NAME>; I am a DL/NLP/CV/TS research engineer, especially passionate about NLP & DL.
#
# Recently I created a kernel for this competition about Weighted Boxes Fusion:
# - [WBF approach for ensemble](https://www.kaggle.com/shonenkov/wbf-approach-for-ensemble)
#
#
# I hope you find it useful! If you haven't read that kernel yet, don't forget to check it out! :)
#
#
# Today I would like to share a solid training pipeline for this competition using the SOTA [EfficientDet: Scalable and Efficient Object Detection](https://arxiv.org/pdf/1911.09070.pdf)
# ## Main Idea
#
# I read [all public kernels about EfficientDet in the Kaggle community](https://www.kaggle.com/search?q=efficientdet+in%3Anotebooks) and realized that Kaggle does not yet have a well-working public kernel with a good score. Why? The picture below shows COCO AP for different architectures; I think everyone should be able to use a tool as strong as EfficientDet for their own research, so let's do it!
#
# <img src='https://miro.medium.com/max/2400/0*ApAKUWtseHcvRV2U.png' width=400>
#
#
# ## Dependencies and imports
import sys
sys.path.insert(0, "timm-efficientdet-pytorch")
sys.path.insert(0, "omegaconf")
# + _kg_hide-input=true _kg_hide-output=true
import torch
import os
from datetime import datetime
import time
import random
import cv2
import pandas as pd
import numpy as np
import albumentations as A
import matplotlib.pyplot as plt
from albumentations.pytorch.transforms import ToTensorV2
from sklearn.model_selection import StratifiedKFold
from torch.utils.data import Dataset,DataLoader
from torch.utils.data.sampler import SequentialSampler, RandomSampler
from glob import glob
SEED = 42
def seed_everything(seed):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.deterministic = True
# NOTE: benchmark=True trades reproducibility for speed;
# set it to False for fully deterministic runs.
torch.backends.cudnn.benchmark = True
seed_everything(SEED)
# +
marking = pd.read_csv('/home/hy/dataset/gwd/train.csv')
bboxs = np.stack(marking['bbox'].apply(lambda x: np.fromstring(x[1:-1], sep=',')))
for i, column in enumerate(['x', 'y', 'w', 'h']):
marking[column] = bboxs[:,i]
marking.drop(columns=['bbox'], inplace=True)
# -
# You can read about the data splitting [here](https://www.kaggle.com/shonenkov/wbf-approach-for-ensemble):
# + _kg_hide-output=true
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
df_folds = marking[['image_id']].copy()
df_folds.loc[:, 'bbox_count'] = 1
df_folds = df_folds.groupby('image_id').count()
df_folds.loc[:, 'source'] = marking[['image_id', 'source']].groupby('image_id').min()['source']
df_folds.loc[:, 'stratify_group'] = np.char.add(
df_folds['source'].values.astype(str),
df_folds['bbox_count'].apply(lambda x: f'_{x // 15}').values.astype(str)
)
df_folds.loc[:, 'fold'] = 0
for fold_number, (train_index, val_index) in enumerate(skf.split(X=df_folds.index, y=df_folds['stratify_group'])):
df_folds.loc[df_folds.iloc[val_index].index, 'fold'] = fold_number
# -
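The `stratify_group` key concatenates the image source with a bucketed bounding-box count (`count // 15`), so folds balance both source and box density; a minimal sketch of that string construction (source names below are illustrative):

```python
def stratify_group(source, bbox_count):
    # bucket the box count into bands of 15, matching f'_{x // 15}' above
    return f'{source}_{bbox_count // 15}'

# images from the same source with similar box counts share a group
assert stratify_group('arvalis_1', 20) == 'arvalis_1_1'
assert stratify_group('arvalis_1', 29) == 'arvalis_1_1'
assert stratify_group('ethz_1', 7) == 'ethz_1_0'
```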
# ## Albumentations
# +
def get_train_transforms():
return A.Compose(
[
A.RandomSizedCrop(min_max_height=(800, 800), height=1024, width=1024, p=0.5),
A.OneOf([
A.HueSaturationValue(hue_shift_limit=0.2, sat_shift_limit= 0.2,
val_shift_limit=0.2, p=0.9),
A.RandomBrightnessContrast(brightness_limit=0.2,
contrast_limit=0.2, p=0.9),
],p=0.9),
A.ToGray(p=0.01),
A.HorizontalFlip(p=0.5),
A.VerticalFlip(p=0.5),
A.Resize(height=512, width=512, p=1),
A.Cutout(num_holes=8, max_h_size=64, max_w_size=64, fill_value=0, p=0.5),
ToTensorV2(p=1.0),
],
p=1.0,
bbox_params=A.BboxParams(
format='pascal_voc',
min_area=0,
min_visibility=0,
label_fields=['labels']
)
)
def get_valid_transforms():
return A.Compose(
[
A.Resize(height=512, width=512, p=1.0),
ToTensorV2(p=1.0),
],
p=1.0,
bbox_params=A.BboxParams(
format='pascal_voc',
min_area=0,
min_visibility=0,
label_fields=['labels']
)
)
# -
# ## Dataset
# +
TRAIN_ROOT_PATH = '/home/hy/dataset/gwd/train'
class DatasetRetriever(Dataset):
def __init__(self, marking, image_ids, transforms=None, test=False):
super().__init__()
self.image_ids = image_ids
self.marking = marking
self.transforms = transforms
self.test = test
def __getitem__(self, index: int):
image_id = self.image_ids[index]
if self.test or random.random() > 0.5:
image, boxes = self.load_image_and_boxes(index)
else:
image, boxes = self.load_cutmix_image_and_boxes(index)
# there is only one class
labels = torch.ones((boxes.shape[0],), dtype=torch.int64)
target = {}
target['boxes'] = boxes
target['labels'] = labels
target['image_id'] = torch.tensor([index])
if self.transforms:
for i in range(10):
sample = self.transforms(**{
'image': image,
'bboxes': target['boxes'],
'labels': labels
})
if len(sample['bboxes']) > 0:
image = sample['image']
target['boxes'] = torch.stack(tuple(map(torch.tensor, zip(*sample['bboxes'])))).permute(1, 0)
target['boxes'][:,[0,1,2,3]] = target['boxes'][:,[1,0,3,2]]  # yxyx: beware, boxes are reordered to [y1, x1, y2, x2]
break
return image, target, image_id
def __len__(self) -> int:
return self.image_ids.shape[0]
def load_image_and_boxes(self, index):
image_id = self.image_ids[index]
image = cv2.imread(f'{TRAIN_ROOT_PATH}/{image_id}.jpg', cv2.IMREAD_COLOR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB).astype(np.float32)
image /= 255.0
records = self.marking[self.marking['image_id'] == image_id]
boxes = records[['x', 'y', 'w', 'h']].values
boxes[:, 2] = boxes[:, 0] + boxes[:, 2]
boxes[:, 3] = boxes[:, 1] + boxes[:, 3]
return image, boxes
def load_cutmix_image_and_boxes(self, index, imsize=1024):
"""
This implementation of cutmix author: https://www.kaggle.com/nvnnghia
Refactoring and adaptation: https://www.kaggle.com/shonenkov
"""
w, h = imsize, imsize
s = imsize // 2
xc, yc = [int(random.uniform(imsize * 0.25, imsize * 0.75)) for _ in range(2)] # center x, y
indexes = [index] + [random.randint(0, self.image_ids.shape[0] - 1) for _ in range(3)]
result_image = np.full((imsize, imsize, 3), 1, dtype=np.float32)
result_boxes = []
for i, index in enumerate(indexes):
image, boxes = self.load_image_and_boxes(index)
if i == 0:
x1a, y1a, x2a, y2a = max(xc - w, 0), max(yc - h, 0), xc, yc # xmin, ymin, xmax, ymax (large image)
x1b, y1b, x2b, y2b = w - (x2a - x1a), h - (y2a - y1a), w, h # xmin, ymin, xmax, ymax (small image)
elif i == 1: # top right
x1a, y1a, x2a, y2a = xc, max(yc - h, 0), min(xc + w, s * 2), yc
x1b, y1b, x2b, y2b = 0, h - (y2a - y1a), min(w, x2a - x1a), h
elif i == 2: # bottom left
x1a, y1a, x2a, y2a = max(xc - w, 0), yc, xc, min(s * 2, yc + h)
x1b, y1b, x2b, y2b = w - (x2a - x1a), 0, max(xc, w), min(y2a - y1a, h)
elif i == 3: # bottom right
x1a, y1a, x2a, y2a = xc, yc, min(xc + w, s * 2), min(s * 2, yc + h)
x1b, y1b, x2b, y2b = 0, 0, min(w, x2a - x1a), min(y2a - y1a, h)
result_image[y1a:y2a, x1a:x2a] = image[y1b:y2b, x1b:x2b]
padw = x1a - x1b
padh = y1a - y1b
boxes[:, 0] += padw
boxes[:, 1] += padh
boxes[:, 2] += padw
boxes[:, 3] += padh
result_boxes.append(boxes)
result_boxes = np.concatenate(result_boxes, 0)
np.clip(result_boxes[:, 0:], 0, 2 * s, out=result_boxes[:, 0:])
result_boxes = result_boxes.astype(np.int32)
result_boxes = result_boxes[np.where((result_boxes[:,2]-result_boxes[:,0])*(result_boxes[:,3]-result_boxes[:,1]) > 0)]
return result_image, result_boxes
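`load_image_and_boxes` converts the CSV's `[x, y, w, h]` boxes to corner format by adding width and height to the origin; the same transform as a standalone sketch:

```python
def xywh_to_xyxy(box):
    # [x, y, w, h] -> [x1, y1, x2, y2], as done in load_image_and_boxes
    x, y, w, h = box
    return [x, y, x + w, y + h]

assert xywh_to_xyxy([10, 20, 30, 40]) == [10, 20, 40, 60]
# a zero-size box degenerates to a point
assert xywh_to_xyxy([5, 5, 0, 0]) == [5, 5, 5, 5]
```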
# +
fold_number = 0
train_dataset = DatasetRetriever(
image_ids=df_folds[df_folds['fold'] != fold_number].index.values,
marking=marking,
transforms=get_train_transforms(),
test=False,
)
validation_dataset = DatasetRetriever(
image_ids=df_folds[df_folds['fold'] == fold_number].index.values,
marking=marking,
transforms=get_valid_transforms(),
test=True,
)
# +
image, target, image_id = train_dataset[1]
boxes = target['boxes'].cpu().numpy().astype(np.int32)
numpy_image = image.permute(1,2,0).cpu().numpy()
fig, ax = plt.subplots(1, 1, figsize=(16, 8))
for box in boxes:
cv2.rectangle(numpy_image, (box[1], box[0]), (box[3], box[2]), (0, 1, 0), 2)
ax.set_axis_off()
ax.imshow(numpy_image);
# -
# ## Fitter
class AverageMeter(object):
"""Computes and stores the average and current value"""
def __init__(self):
self.reset()
def reset(self):
self.val = 0
self.avg = 0
self.sum = 0
self.count = 0
def update(self, val, n=1):
self.val = val
self.sum += val * n
self.count += n
self.avg = self.sum / self.count
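`AverageMeter` keeps a weighted running mean of the loss; a standalone restatement of the same bookkeeping (the class is re-declared here so the snippet runs on its own):

```python
class AverageMeter:
    # mirrors the class above: weighted running average of a scalar
    def __init__(self):
        self.val = self.avg = self.sum = self.count = 0

    def update(self, val, n=1):
        self.val = val
        self.sum += val * n
        self.count += n
        self.avg = self.sum / self.count

meter = AverageMeter()
meter.update(2.0, n=3)   # e.g. a batch of 3 samples with mean loss 2.0
meter.update(4.0, n=1)   # a batch of 1 sample with loss 4.0
assert meter.avg == 2.5  # (2*3 + 4*1) / 4
```

Weighting by `n` (the batch size) makes the epoch average exact even when the last batch is smaller.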
# +
import warnings
import torch.nn as nn
warnings.filterwarnings("ignore")
class Fitter:
def __init__(self, model, device, config):
self.config = config
self.epoch = 0
self.base_dir = f'./{config.folder}'
if not os.path.exists(self.base_dir):
os.makedirs(self.base_dir)
self.log_path = f'{self.base_dir}/log.txt'
self.best_summary_loss = 10**5
self.model = model
#self.model = nn.DataParallel(model)
self.device = device
param_optimizer = list(self.model.named_parameters())
no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight']
optimizer_grouped_parameters = [
{'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay': 0.001},
{'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}
]
# NOTE: `optimizer_grouped_parameters` is built above but not passed in;
# this AdamW uses all parameters with the default weight decay.
self.optimizer = torch.optim.AdamW(self.model.parameters(), lr=config.lr)
#self.optimizer = QHAdamW(self.model.parameters(), lr=config.lr)
self.scheduler = config.SchedulerClass(self.optimizer, **config.scheduler_params)
self.log(f'Fitter prepared. Device is {self.device}')
def fit(self, train_loader, validation_loader):
for e in range(self.config.n_epochs):
if self.config.verbose:
lr = self.optimizer.param_groups[0]['lr']
timestamp = datetime.utcnow().isoformat()
self.log(f'\n{timestamp}\nLR: {lr}')
t = time.time()
summary_loss = self.train_one_epoch(train_loader)
self.log(f'[RESULT]: Train. Epoch: {self.epoch}, summary_loss: {summary_loss.avg:.5f}, time: {(time.time() - t):.5f}')
self.save(f'{self.base_dir}/last-checkpoint.bin')
t = time.time()
summary_loss = self.validation(validation_loader)
self.log(f'[RESULT]: Val. Epoch: {self.epoch}, summary_loss: {summary_loss.avg:.5f}, time: {(time.time() - t):.5f}')
if summary_loss.avg < self.best_summary_loss:
self.best_summary_loss = summary_loss.avg
self.model.eval()
self.save(f'{self.base_dir}/best-checkpoint-{str(self.epoch).zfill(3)}epoch.bin')
for path in sorted(glob(f'{self.base_dir}/best-checkpoint-*epoch.bin'))[:-3]:
os.remove(path)
if self.config.validation_scheduler:
self.scheduler.step(metrics=summary_loss.avg)
self.epoch += 1
def validation(self, val_loader):
self.model.eval()
summary_loss = AverageMeter()
t = time.time()
for step, (images, targets, image_ids) in enumerate(val_loader):
if self.config.verbose:
if step % self.config.verbose_step == 0:
print(
f'Val Step {step}/{len(val_loader)}, ' + \
f'summary_loss: {summary_loss.avg:.5f}, ' + \
f'time: {(time.time() - t):.5f}', end='\r'
)
with torch.no_grad():
images = torch.stack(images)
batch_size = images.shape[0]
images = images.to(self.device).float()
boxes = [target['boxes'].to(self.device).float() for target in targets]
labels = [target['labels'].to(self.device).float() for target in targets]
loss, _, _ = self.model(images, boxes, labels)
summary_loss.update(loss.detach().item(), batch_size)
return summary_loss
def train_one_epoch(self, train_loader):
self.model.train()
summary_loss = AverageMeter()
t = time.time()
for step, (images, targets, image_ids) in enumerate(train_loader):
if self.config.verbose:
if step % self.config.verbose_step == 0:
print(
f'Train Step {step}/{len(train_loader)}, ' + \
f'summary_loss: {summary_loss.avg:.5f}, ' + \
f'time: {(time.time() - t):.5f}', end='\r'
)
images = torch.stack(images)
images = images.to(self.device).float()
batch_size = images.shape[0]
boxes = [target['boxes'].to(self.device).float() for target in targets]
labels = [target['labels'].to(self.device).float() for target in targets]
self.optimizer.zero_grad()
loss, _, _ = self.model(images, boxes, labels)
loss.backward()
summary_loss.update(loss.detach().item(), batch_size)
self.optimizer.step()
if self.config.step_scheduler:
self.scheduler.step()
return summary_loss
def save(self, path):
self.model.eval()
torch.save({
'model_state_dict': self.model.model.state_dict(),
'optimizer_state_dict': self.optimizer.state_dict(),
'scheduler_state_dict': self.scheduler.state_dict(),
'best_summary_loss': self.best_summary_loss,
'epoch': self.epoch,
}, path)
def load(self, path):
checkpoint = torch.load(path)
self.model.model.load_state_dict(checkpoint['model_state_dict'])
self.optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
self.scheduler.load_state_dict(checkpoint['scheduler_state_dict'])
self.best_summary_loss = checkpoint['best_summary_loss']
self.epoch = checkpoint['epoch'] + 1
def log(self, message):
if self.config.verbose:
print(message)
with open(self.log_path, 'a+') as logger:
logger.write(f'{message}\n')
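The `Fitter` builds `optimizer_grouped_parameters` so that bias and LayerNorm parameters are exempt from weight decay. A minimal standalone sketch of that grouping logic, using a toy parameter list in place of `model.named_parameters()` (names here are hypothetical):

```python
# Sketch of the no-weight-decay grouping used by the Fitter above.
no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight']

def group_parameters(named_params, weight_decay=0.001):
    """Split (name, param) pairs into decay / no-decay groups for AdamW."""
    decay = [p for n, p in named_params if not any(nd in n for nd in no_decay)]
    skip = [p for n, p in named_params if any(nd in n for nd in no_decay)]
    return [
        {'params': decay, 'weight_decay': weight_decay},
        {'params': skip, 'weight_decay': 0.0},
    ]

# Toy stand-in for model.named_parameters(): one decayed, two exempt
params = [('conv.weight', 'W'), ('conv.bias', 'b'), ('LayerNorm.weight', 'g')]
groups = group_parameters(params)
print(len(groups[0]['params']), len(groups[1]['params']))  # → 1 2
```

Passing these groups (rather than `model.parameters()`) to `torch.optim.AdamW` is what makes the exemption take effect.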
# +
class TrainGlobalConfig:
num_workers = 4
batch_size = 8
#n_epochs = 40
n_epochs = 50
lr = 0.0002
folder = '0525_effdet5-cutmix-augmix_test'
# -------------------
verbose = True
verbose_step = 1
# -------------------
# --------------------
step_scheduler = False # do scheduler.step after optimizer.step
validation_scheduler = True # do scheduler.step after validation stage loss
# SchedulerClass = torch.optim.lr_scheduler.OneCycleLR
# scheduler_params = dict(
# max_lr=0.001,
# epochs=n_epochs,
# steps_per_epoch=int(len(train_dataset) / batch_size),
# pct_start=0.1,
# anneal_strategy='cos',
# final_div_factor=10**5
# )
SchedulerClass = torch.optim.lr_scheduler.ReduceLROnPlateau
scheduler_params = dict(
mode='min',
factor=0.5,
patience=1,
verbose=False,
threshold=0.0001,
threshold_mode='abs',
cooldown=0,
min_lr=1e-8,
eps=1e-08
)
# --------------------
# +
def collate_fn(batch):
return tuple(zip(*batch))
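`collate_fn` transposes a list of `(image, target, image_id)` samples into parallel tuples, so detection targets of varying size need no padding. A toy illustration:

```python
# A batch of three-element samples, as the DataLoader would deliver them.
batch = [('img0', 't0', 0), ('img1', 't1', 1)]

# tuple(zip(*batch)) regroups per-sample tuples into per-field tuples.
images, targets, image_ids = tuple(zip(*batch))
print(images)     # → ('img0', 'img1')
print(image_ids)  # → (0, 1)
```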
def run_training():
device = torch.device('cuda:0')
net.to(device)
train_loader = torch.utils.data.DataLoader(
train_dataset,
batch_size=TrainGlobalConfig.batch_size,
sampler=RandomSampler(train_dataset),
pin_memory=False,
drop_last=True,
num_workers=TrainGlobalConfig.num_workers,
collate_fn=collate_fn,
)
val_loader = torch.utils.data.DataLoader(
validation_dataset,
batch_size=TrainGlobalConfig.batch_size,
num_workers=TrainGlobalConfig.num_workers,
shuffle=False,
sampler=SequentialSampler(validation_dataset),
pin_memory=False,
collate_fn=collate_fn,
)
fitter = Fitter(model=net, device=device, config=TrainGlobalConfig)
fitter.fit(train_loader, val_loader)
# +
from effdet import get_efficientdet_config, EfficientDet, DetBenchTrain
from effdet.efficientdet import HeadNet
def get_net():
config = get_efficientdet_config('tf_efficientdet_d5')
net = EfficientDet(config, pretrained_backbone=False)
checkpoint = torch.load('efficientdet_d5-ef44aea8.pth')
net.load_state_dict(checkpoint)
config.num_classes = 1
config.image_size = 512
net.class_net = HeadNet(config, num_outputs=config.num_classes, norm_kwargs=dict(eps=.001, momentum=.01))
return DetBenchTrain(net, config)
net = get_net()
# -
run_training()
# ### Thank you for reading my kernel!
#
# So, I have prepared a good SOTA-model training baseline for you, my friends! I used n_epochs = 40 and got a best-checkpoint single model that scores 0.7176 on the LB. You can see the inference kernel [here](https://www.kaggle.com/shonenkov/inference-efficientdet).
#
# I have only recently started publishing my work; if you like this notebook format, I would like to continue making kernels.
| training-efficientdet-parallel.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (cs224w)
# language: python
# name: cs224w
# ---
import snap
import pickle
from matplotlib import pyplot as plt
import seaborn as sns; sns.set()
import numpy as np
from sklearn.cluster import KMeans
import pandas as pd
import matplotlib
# ### Load the adjacency matrix
f = open('./data/sensor_graph/adj_mx.pkl', 'rb')  # 'rb': read in binary mode, required for pickle files
data = pickle.load(f)  # load the pickled adjacency data
f.close()
# ### Plot to visualize
ax = sns.heatmap(data[2], vmin=0.5, vmax=1, cmap="YlGnBu")
plt.show()
data[1]
# ### However, the edgelist does not contain self-loops
# Set the diagonal entries to be 0
for i in range(data[2].shape[0]):
data[2][i, i] = 0
ax = sns.heatmap(data[2], vmin=0.5, vmax=1, cmap="YlGnBu")
plt.show()
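The loop above zeroes the diagonal one entry at a time; `np.fill_diagonal` does the same in place in one call. A small sketch on a toy matrix:

```python
import numpy as np

# Toy 2x2 weighted adjacency matrix with self-loops on the diagonal.
adj = np.array([[1.0, 0.3],
                [0.3, 1.0]])

# Zero the self-loop weights in place.
np.fill_diagonal(adj, 0.0)
print(adj)  # off-diagonal weights are untouched
```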
# ### Produce edge list from the weighted matrix
# Also, notice that node 26 is missing from the weight matrix file.
data[2][26,:]
len(np.where(data[2]!=0.0)[0])
nonZeroEleInd = np.where(data[2]!=0.0)
with open('METR-LA.txt', 'w') as f:
for index in range(len(nonZeroEleInd[0])):
x = nonZeroEleInd[0][index]
y = nonZeroEleInd[1][index]
f.write('{} {} {}\n'.format(x, y, data[2][x, y]))
a = list(set(nonZeroEleInd[0]).union(set(nonZeroEleInd[1])))
sum(range(a[0], a[-1] + 1)) - sum(a)  # gap-sum: recovers the single missing node id
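The gap-sum expression above exploits the fact that when exactly one id is missing from an otherwise contiguous range, the difference between the full-range sum and the observed sum is that id:

```python
# Toy node ids with a single gap: 3 is missing.
present = [0, 1, 2, 4, 5]
missing = sum(range(present[0], present[-1] + 1)) - sum(present)
print(missing)  # → 3
```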
with open('embed_file.txt', 'w') as f:
f.write('1234')
with open('embed_file.txt', 'r') as f:
a = f.read()
a
'{}.txt'.format(a)
| figures/produceEdgeList.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Environment (conda_pytorch_p36_fresh)
# language: python
# name: conda_pytorch_p36_fresh
# ---
# +
import numpy as np
import scipy
import pandas as pd
import random, os, h5py, math, time, glob, pickle  # pickle is needed below in load_data
import torch
import torch.nn as nn
from torch.autograd import Variable
from torch import optim
import torch.nn.functional as F
import sklearn
from sklearn.model_selection import train_test_split
from sklearn import metrics
from sklearn.preprocessing import OneHotEncoder
import keras
import keras.backend as K
from keras.layers import Dense, Dropout, Activation, Flatten, Input, Lambda
from keras.layers import Conv2D, MaxPooling2D, Conv1D, MaxPooling1D, LSTM, ConvLSTM2D, GRU, BatchNormalization, LocallyConnected2D, Permute
from keras.layers import Concatenate, Reshape, Softmax, Conv2DTranspose, Embedding, Multiply
from keras import Model
import keras.optimizers
from keras.models import Sequential, Model, load_model
import tensorflow as tf
#tf.disable_v2_behavior()
from mpradragonn_predictor_pytorch import *
class IdentityEncoder :
def __init__(self, seq_len, channel_map) :
self.seq_len = seq_len
self.n_channels = len(channel_map)
self.encode_map = channel_map
self.decode_map = {
nt: ix for ix, nt in self.encode_map.items()
}
def encode(self, seq) :
encoding = np.zeros((self.seq_len, self.n_channels))
for i in range(len(seq)) :
if seq[i] in self.encode_map :
channel_ix = self.encode_map[seq[i]]
encoding[i, channel_ix] = 1.
return encoding
def encode_inplace(self, seq, encoding) :
for i in range(len(seq)) :
if seq[i] in self.encode_map :
channel_ix = self.encode_map[seq[i]]
encoding[i, channel_ix] = 1.
def encode_inplace_sparse(self, seq, encoding_mat, row_index) :
raise NotImplementedError()
def decode(self, encoding) :
seq = ''
for pos in range(0, encoding.shape[0]) :
argmax_nt = np.argmax(encoding[pos, :])
max_nt = np.max(encoding[pos, :])
seq += self.decode_map[argmax_nt]
return seq
def decode_sparse(self, encoding_mat, row_index) :
raise NotImplementedError()
# -
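The `IdentityEncoder` above one-hot encodes a nucleotide sequence and decodes it back via per-position argmax. A self-contained round-trip check of that scheme (re-implemented minimally here for illustration):

```python
import numpy as np

# Same channel map as the encoder above.
channel_map = {'A': 0, 'C': 1, 'G': 2, 'T': 3}
decode_map = {ix: nt for nt, ix in channel_map.items()}

seq = 'ACGT'
encoding = np.zeros((len(seq), 4))
for i, nt in enumerate(seq):
    encoding[i, channel_map[nt]] = 1.0  # one-hot per position

# Decode by taking the argmax channel at each position.
decoded = ''.join(decode_map[int(np.argmax(encoding[pos]))]
                  for pos in range(encoding.shape[0]))
print(decoded)  # → ACGT
```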
#Load pytorch MPRA-DragoNN model skeleton
analyzer = DragoNNClassifier(run_name='mpradragonn_pytorch', seq_len=145)
# +
#Load MPRA-DragoNN Keras predictor model
#Specify the file path to the pre-trained predictor network
def load_data(data_name, valid_set_size=0.05, test_set_size=0.05) :
#Load cached dataframe
cached_dict = pickle.load(open(data_name, 'rb'))
x_train = cached_dict['x_train']
y_train = cached_dict['y_train']
x_test = cached_dict['x_test']
y_test = cached_dict['y_test']
x_train = np.moveaxis(x_train, 3, 1)
x_test = np.moveaxis(x_test, 3, 1)
return x_train, x_test
def load_predictor_model(model_path) :
saved_model = Sequential()
# sublayer 1
saved_model.add(Conv1D(48, 3, padding='same', activation='relu', input_shape=(145, 4), name='dragonn_conv1d_1_copy'))
saved_model.add(BatchNormalization(name='dragonn_batchnorm_1_copy'))
saved_model.add(Dropout(0.1, name='dragonn_dropout_1_copy'))
saved_model.add(Conv1D(64, 3, padding='same', activation='relu', name='dragonn_conv1d_2_copy'))
saved_model.add(BatchNormalization(name='dragonn_batchnorm_2_copy'))
saved_model.add(Dropout(0.1, name='dragonn_dropout_2_copy'))
saved_model.add(Conv1D(100, 3, padding='same', activation='relu', name='dragonn_conv1d_3_copy'))
saved_model.add(BatchNormalization(name='dragonn_batchnorm_3_copy'))
saved_model.add(Dropout(0.1, name='dragonn_dropout_3_copy'))
saved_model.add(Conv1D(150, 7, padding='same', activation='relu', name='dragonn_conv1d_4_copy'))
saved_model.add(BatchNormalization(name='dragonn_batchnorm_4_copy'))
saved_model.add(Dropout(0.1, name='dragonn_dropout_4_copy'))
saved_model.add(Conv1D(300, 7, padding='same', activation='relu', name='dragonn_conv1d_5_copy'))
saved_model.add(BatchNormalization(name='dragonn_batchnorm_5_copy'))
saved_model.add(Dropout(0.1, name='dragonn_dropout_5_copy'))
saved_model.add(MaxPooling1D(3))
# sublayer 2
saved_model.add(Conv1D(200, 7, padding='same', activation='relu', name='dragonn_conv1d_6_copy'))
saved_model.add(BatchNormalization(name='dragonn_batchnorm_6_copy'))
saved_model.add(Dropout(0.1, name='dragonn_dropout_6_copy'))
saved_model.add(Conv1D(200, 3, padding='same', activation='relu', name='dragonn_conv1d_7_copy'))
saved_model.add(BatchNormalization(name='dragonn_batchnorm_7_copy'))
saved_model.add(Dropout(0.1, name='dragonn_dropout_7_copy'))
saved_model.add(Conv1D(200, 3, padding='same', activation='relu', name='dragonn_conv1d_8_copy'))
saved_model.add(BatchNormalization(name='dragonn_batchnorm_8_copy'))
saved_model.add(Dropout(0.1, name='dragonn_dropout_8_copy'))
saved_model.add(MaxPooling1D(4))
# sublayer 3
saved_model.add(Conv1D(200, 7, padding='same', activation='relu', name='dragonn_conv1d_9_copy'))
saved_model.add(BatchNormalization(name='dragonn_batchnorm_9_copy'))
saved_model.add(Dropout(0.1, name='dragonn_dropout_9_copy'))
saved_model.add(MaxPooling1D(4))
saved_model.add(Flatten())
saved_model.add(Dense(100, activation='relu', name='dragonn_dense_1_copy'))
saved_model.add(BatchNormalization(name='dragonn_batchnorm_10_copy'))
saved_model.add(Dropout(0.1, name='dragonn_dropout_10_copy'))
saved_model.add(Dense(12, activation='linear', name='dragonn_dense_2_copy'))
saved_model.compile(
loss= "mean_squared_error",
optimizer=keras.optimizers.SGD(lr=0.1)
)
saved_model.load_weights(model_path)
return saved_model
#Specify the file path to the pre-trained predictor network
saved_predictor_model_path = '../seqprop/examples/mpradragonn/pretrained_deep_factorized_model.hdf5'
saved_predictor = load_predictor_model(saved_predictor_model_path)
acgt_encoder = IdentityEncoder(145, {'A':0, 'C':1, 'G':2, 'T':3})
#Get latent space predictor
saved_predictor_w_dense = Model(
inputs = saved_predictor.inputs,
outputs = saved_predictor.outputs + [saved_predictor.get_layer('dragonn_dropout_1_copy').output]
)
saved_predictor_w_dense.compile(loss='mse', optimizer=keras.optimizers.SGD(lr=0.1))
# -
saved_predictor.summary()
# +
#Collect weights from keras model
conv_1_weight, conv_1_bias = saved_predictor.get_layer('dragonn_conv1d_1_copy').get_weights()
conv_1_weight = np.expand_dims(conv_1_weight, axis=1)
gamma_1, beta_1, moving_mean_1, moving_var_1 = saved_predictor.get_layer('dragonn_batchnorm_1_copy').get_weights()
conv_2_weight, conv_2_bias = saved_predictor.get_layer('dragonn_conv1d_2_copy').get_weights()
conv_2_weight = np.expand_dims(conv_2_weight, axis=1)
gamma_2, beta_2, moving_mean_2, moving_var_2 = saved_predictor.get_layer('dragonn_batchnorm_2_copy').get_weights()
conv_3_weight, conv_3_bias = saved_predictor.get_layer('dragonn_conv1d_3_copy').get_weights()
conv_3_weight = np.expand_dims(conv_3_weight, axis=1)
gamma_3, beta_3, moving_mean_3, moving_var_3 = saved_predictor.get_layer('dragonn_batchnorm_3_copy').get_weights()
conv_4_weight, conv_4_bias = saved_predictor.get_layer('dragonn_conv1d_4_copy').get_weights()
conv_4_weight = np.expand_dims(conv_4_weight, axis=1)
gamma_4, beta_4, moving_mean_4, moving_var_4 = saved_predictor.get_layer('dragonn_batchnorm_4_copy').get_weights()
conv_5_weight, conv_5_bias = saved_predictor.get_layer('dragonn_conv1d_5_copy').get_weights()
conv_5_weight = np.expand_dims(conv_5_weight, axis=1)
gamma_5, beta_5, moving_mean_5, moving_var_5 = saved_predictor.get_layer('dragonn_batchnorm_5_copy').get_weights()
conv_6_weight, conv_6_bias = saved_predictor.get_layer('dragonn_conv1d_6_copy').get_weights()
conv_6_weight = np.expand_dims(conv_6_weight, axis=1)
gamma_6, beta_6, moving_mean_6, moving_var_6 = saved_predictor.get_layer('dragonn_batchnorm_6_copy').get_weights()
conv_7_weight, conv_7_bias = saved_predictor.get_layer('dragonn_conv1d_7_copy').get_weights()
conv_7_weight = np.expand_dims(conv_7_weight, axis=1)
gamma_7, beta_7, moving_mean_7, moving_var_7 = saved_predictor.get_layer('dragonn_batchnorm_7_copy').get_weights()
conv_8_weight, conv_8_bias = saved_predictor.get_layer('dragonn_conv1d_8_copy').get_weights()
conv_8_weight = np.expand_dims(conv_8_weight, axis=1)
gamma_8, beta_8, moving_mean_8, moving_var_8 = saved_predictor.get_layer('dragonn_batchnorm_8_copy').get_weights()
conv_9_weight, conv_9_bias = saved_predictor.get_layer('dragonn_conv1d_9_copy').get_weights()
conv_9_weight = np.expand_dims(conv_9_weight, axis=1)
gamma_9, beta_9, moving_mean_9, moving_var_9 = saved_predictor.get_layer('dragonn_batchnorm_9_copy').get_weights()
dense_10_weight, dense_10_bias = saved_predictor.get_layer('dragonn_dense_1_copy').get_weights()
gamma_10, beta_10, moving_mean_10, moving_var_10 = saved_predictor.get_layer('dragonn_batchnorm_10_copy').get_weights()
dense_11_weight, dense_11_bias = saved_predictor.get_layer('dragonn_dense_2_copy').get_weights()
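The per-layer extraction above repeats the same three lines nine times; since the layer names are systematic (`dragonn_conv1d_{i}_copy`, `dragonn_batchnorm_{i}_copy`), it could be condensed into a loop keyed by index. A runnable sketch with a toy dict standing in for `saved_predictor.get_layer(name).get_weights()`:

```python
import numpy as np

# Toy stand-in: maps layer name -> (kernel, bias), as get_weights() would return.
fake_layers = {
    f'dragonn_conv1d_{i}_copy': (np.zeros((3, 4, 8)), np.zeros(8))
    for i in range(1, 10)
}

conv_params = {}
for i in range(1, 10):
    w, b = fake_layers[f'dragonn_conv1d_{i}_copy']
    # Insert a singleton height dim, matching the expand_dims(axis=1) calls above.
    conv_params[i] = (np.expand_dims(w, axis=1), b)

print(conv_params[1][0].shape)  # → (3, 1, 4, 8)
```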
# +
print(conv_1_weight.shape)
print(conv_1_bias.shape)
print("----------")
print(beta_1.shape)
print(gamma_1.shape)
print(moving_mean_1.shape)
print(moving_var_1.shape)
print("----------")
print(conv_2_weight.shape)
print(conv_2_bias.shape)
print("----------")
print(beta_2.shape)
print(gamma_2.shape)
print(moving_mean_2.shape)
print(moving_var_2.shape)
# +
print(analyzer.cnn.conv1.weight.shape)
print(analyzer.cnn.conv1.bias.shape)
print("----------")
print(analyzer.cnn.norm1.bias.shape)
print(analyzer.cnn.norm1.weight.shape)
print(analyzer.cnn.norm1.running_mean.shape)
print(analyzer.cnn.norm1.running_var.shape)
print("----------")
print(analyzer.cnn.conv2.weight.shape)
print(analyzer.cnn.conv2.bias.shape)
print("----------")
print(analyzer.cnn.norm2.bias.shape)
print(analyzer.cnn.norm2.weight.shape)
print(analyzer.cnn.norm2.running_mean.shape)
print(analyzer.cnn.norm2.running_var.shape)
# +
#Manually transfer model weights from keras to pytorch
with torch.no_grad() :
analyzer.cnn.conv1.weight = nn.Parameter(torch.FloatTensor(np.transpose(conv_1_weight, (3, 2, 1, 0))))
analyzer.cnn.conv1.bias = nn.Parameter(torch.FloatTensor(conv_1_bias))
analyzer.cnn.norm1.bias = nn.Parameter(torch.FloatTensor(beta_1))
analyzer.cnn.norm1.weight = nn.Parameter(torch.FloatTensor(gamma_1))
analyzer.cnn.norm1.running_mean = nn.Parameter(torch.FloatTensor(moving_mean_1))
analyzer.cnn.norm1.running_var = nn.Parameter(torch.FloatTensor(moving_var_1))
analyzer.cnn.conv2.weight = nn.Parameter(torch.FloatTensor(np.transpose(conv_2_weight, (3, 2, 1, 0))))
analyzer.cnn.conv2.bias = nn.Parameter(torch.FloatTensor(conv_2_bias))
analyzer.cnn.norm2.bias = nn.Parameter(torch.FloatTensor(beta_2))
analyzer.cnn.norm2.weight = nn.Parameter(torch.FloatTensor(gamma_2))
analyzer.cnn.norm2.running_mean = nn.Parameter(torch.FloatTensor(moving_mean_2))
analyzer.cnn.norm2.running_var = nn.Parameter(torch.FloatTensor(moving_var_2))
analyzer.cnn.conv3.weight = nn.Parameter(torch.FloatTensor(np.transpose(conv_3_weight, (3, 2, 1, 0))))
analyzer.cnn.conv3.bias = nn.Parameter(torch.FloatTensor(conv_3_bias))
analyzer.cnn.norm3.bias = nn.Parameter(torch.FloatTensor(beta_3))
analyzer.cnn.norm3.weight = nn.Parameter(torch.FloatTensor(gamma_3))
analyzer.cnn.norm3.running_mean = nn.Parameter(torch.FloatTensor(moving_mean_3))
analyzer.cnn.norm3.running_var = nn.Parameter(torch.FloatTensor(moving_var_3))
analyzer.cnn.conv4.weight = nn.Parameter(torch.FloatTensor(np.transpose(conv_4_weight, (3, 2, 1, 0))))
analyzer.cnn.conv4.bias = nn.Parameter(torch.FloatTensor(conv_4_bias))
analyzer.cnn.norm4.bias = nn.Parameter(torch.FloatTensor(beta_4))
analyzer.cnn.norm4.weight = nn.Parameter(torch.FloatTensor(gamma_4))
analyzer.cnn.norm4.running_mean = nn.Parameter(torch.FloatTensor(moving_mean_4))
analyzer.cnn.norm4.running_var = nn.Parameter(torch.FloatTensor(moving_var_4))
analyzer.cnn.conv5.weight = nn.Parameter(torch.FloatTensor(np.transpose(conv_5_weight, (3, 2, 1, 0))))
analyzer.cnn.conv5.bias = nn.Parameter(torch.FloatTensor(conv_5_bias))
analyzer.cnn.norm5.bias = nn.Parameter(torch.FloatTensor(beta_5))
analyzer.cnn.norm5.weight = nn.Parameter(torch.FloatTensor(gamma_5))
analyzer.cnn.norm5.running_mean = nn.Parameter(torch.FloatTensor(moving_mean_5))
analyzer.cnn.norm5.running_var = nn.Parameter(torch.FloatTensor(moving_var_5))
analyzer.cnn.conv6.weight = nn.Parameter(torch.FloatTensor(np.transpose(conv_6_weight, (3, 2, 1, 0))))
analyzer.cnn.conv6.bias = nn.Parameter(torch.FloatTensor(conv_6_bias))
analyzer.cnn.norm6.bias = nn.Parameter(torch.FloatTensor(beta_6))
analyzer.cnn.norm6.weight = nn.Parameter(torch.FloatTensor(gamma_6))
analyzer.cnn.norm6.running_mean = nn.Parameter(torch.FloatTensor(moving_mean_6))
analyzer.cnn.norm6.running_var = nn.Parameter(torch.FloatTensor(moving_var_6))
analyzer.cnn.conv7.weight = nn.Parameter(torch.FloatTensor(np.transpose(conv_7_weight, (3, 2, 1, 0))))
analyzer.cnn.conv7.bias = nn.Parameter(torch.FloatTensor(conv_7_bias))
analyzer.cnn.norm7.bias = nn.Parameter(torch.FloatTensor(beta_7))
analyzer.cnn.norm7.weight = nn.Parameter(torch.FloatTensor(gamma_7))
analyzer.cnn.norm7.running_mean = nn.Parameter(torch.FloatTensor(moving_mean_7))
analyzer.cnn.norm7.running_var = nn.Parameter(torch.FloatTensor(moving_var_7))
analyzer.cnn.conv8.weight = nn.Parameter(torch.FloatTensor(np.transpose(conv_8_weight, (3, 2, 1, 0))))
analyzer.cnn.conv8.bias = nn.Parameter(torch.FloatTensor(conv_8_bias))
analyzer.cnn.norm8.bias = nn.Parameter(torch.FloatTensor(beta_8))
analyzer.cnn.norm8.weight = nn.Parameter(torch.FloatTensor(gamma_8))
analyzer.cnn.norm8.running_mean = nn.Parameter(torch.FloatTensor(moving_mean_8))
analyzer.cnn.norm8.running_var = nn.Parameter(torch.FloatTensor(moving_var_8))
analyzer.cnn.conv9.weight = nn.Parameter(torch.FloatTensor(np.transpose(conv_9_weight, (3, 2, 1, 0))))
analyzer.cnn.conv9.bias = nn.Parameter(torch.FloatTensor(conv_9_bias))
analyzer.cnn.norm9.bias = nn.Parameter(torch.FloatTensor(beta_9))
analyzer.cnn.norm9.weight = nn.Parameter(torch.FloatTensor(gamma_9))
analyzer.cnn.norm9.running_mean = nn.Parameter(torch.FloatTensor(moving_mean_9))
analyzer.cnn.norm9.running_var = nn.Parameter(torch.FloatTensor(moving_var_9))
analyzer.cnn.fc10.weight = nn.Parameter(torch.FloatTensor(np.transpose(dense_10_weight, (1, 0))))
analyzer.cnn.fc10.bias = nn.Parameter(torch.FloatTensor(dense_10_bias))
analyzer.cnn.norm10.bias = nn.Parameter(torch.FloatTensor(beta_10))
analyzer.cnn.norm10.weight = nn.Parameter(torch.FloatTensor(gamma_10))
analyzer.cnn.norm10.running_mean = nn.Parameter(torch.FloatTensor(moving_mean_10))
analyzer.cnn.norm10.running_var = nn.Parameter(torch.FloatTensor(moving_var_10))
analyzer.cnn.fc11.weight = nn.Parameter(torch.FloatTensor(np.transpose(dense_11_weight, (1, 0))))
analyzer.cnn.fc11.bias = nn.Parameter(torch.FloatTensor(dense_11_bias))
analyzer.save_model(epoch=10)
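The repeated `(3, 2, 1, 0)` transpose converts the Keras weight layout (after the earlier `expand_dims`) into PyTorch's: Keras stores Conv1D kernels as `(kernel, 1, in_channels, out_channels)` once the singleton axis is inserted, while PyTorch Conv2d expects `(out_channels, in_channels, 1, kernel)`. A shape-only check of that mapping:

```python
import numpy as np

# Keras layout after expand_dims: (kernel, 1, in_channels, out_channels)
keras_w = np.zeros((7, 1, 100, 150))

# Reorder axes into PyTorch's (out, in, 1, kernel) layout.
torch_w = np.transpose(keras_w, (3, 2, 1, 0))
print(torch_w.shape)  # → (150, 100, 1, 7)
```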
# +
#Reload pytorch model and compare predict function to keras model
analyzer = DragoNNClassifier(run_name='mpradragonn_pytorch', seq_len=145)
# +
n_seqs_to_test = 64
sequence_template = 'N' * 145
#Build random data
random_seqs = [
''.join([
sequence_template[j] if sequence_template[j] != 'N' else np.random.choice(['A', 'C', 'G', 'T'])
for j in range(len(sequence_template))
]) for i in range(n_seqs_to_test)
]
onehots_random = np.concatenate([
np.expand_dims(acgt_encoder.encode(rand_seq), axis=0) for rand_seq in random_seqs
], axis=0)
# +
#Predict fitness using keras model
prob_random_keras, debug_keras = saved_predictor_w_dense.predict(x=[onehots_random], batch_size=32)
prob_random_keras = np.ravel(prob_random_keras[:, 5])
#Predict fitness using pytorch model
prob_random_pytorch = analyzer.predict_model(random_seqs)
prob_random_pytorch = np.ravel(prob_random_pytorch)
# +
for i, [p_keras, p_pytorch] in enumerate(zip(prob_random_keras.tolist(), prob_random_pytorch.tolist())) :
print("--------------------")
print("Sequence " + str(i))
print("prob (keras) = " + str(round(p_keras, 4)))
print("prob (pytorch) = " + str(round(p_pytorch, 4)))
# -
| analysis/fbgan/lift_mpradragonn_from_keras_to_pytorch.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cross_decomposition import CCA
# # indexing workflow:
# - for each correlated enhancer, use fangming's indexes to get the corresponding gene name
# - my enhancer index matches fangming's
#
# # The difficult part is getting promoter kmers
# # Import kmers
# +
kmer_nums = [2, 3, 4, 5]
enhancer_kmer_list =[]
promoter_kmer_list = []
data_path_enh = 'data/enhancers_chromsort_kmer_{}_bases_1000.tsv'
data_path_prom = 'data/promoter_sort_kmer_{}_bases_1000.tsv'
for k in kmer_nums:
en = pd.read_csv(data_path_enh.format(k), sep='\t').set_index('0')
prom = pd.read_csv(data_path_prom.format(k), sep='\t').set_index('0')
enhancer_kmer_list.append(en)
promoter_kmer_list.append(prom)
# -
enhancer_kmers = pd.concat(enhancer_kmer_list, axis=1)
promoter_kmers = pd.concat(promoter_kmer_list, axis=1)
enhancer_kmers.head()
promoter_kmers.head()
# # import genes
ref_path = '/cndd2/ethan/projects/scf_paper/modeling/working_data/promoter/mm10_promoter{}.{}'
genes=pd.read_csv(ref_path.format('', 'bed'), sep='\t', header=None)
enhs_list = pd.read_csv('/cndd2/ethan/projects/enh_gene_linkage/enhancer_sequence/data/enhancers_chromsort_center_1000.bed', sep='\t', header=None)
genes['kmer_format'] = '>' + genes[0] + ':' + genes[1].astype(str) + '-' + genes[2].astype(str) + '\n'
enhs_list['kmer_format'] = '>' + enhs_list[0] + ':' + enhs_list[1].astype(str) + '-' + enhs_list[2].astype(str)
# # get sig pairs
import pickle
sig_pairs = pickle.load(open('/cndd2/ethan/projects/enh_gene_linkage/epigenetic_uncorrelation/sig_pairs.pkl', 'rb'))
mcg = sig_pairs['mCG']
atac= sig_pairs['Open Chromatin']
gene_mcg = []
enh_mcg = []
for i in mcg:
tmp = i.split(':')
gene_mcg.append(tmp[0])
enh_mcg.append(tmp[1])
enhs_list.head()
use_enhs = enhs_list.loc[np.array(enh_mcg).astype(int)]
use_enhs.shape
use_enhs.head()
use_enhs['paired_gene'] = gene_mcg
use_enhs.head()
my_gene_list = pd.read_csv('/cndd2/ethan/projects/enh_gene_linkage/enhancer_sequence/data/promoter_sort_center_1000.bed', sep='\t', header=None)
my_gene_list.head()
my_gene_list[3] = [i.split('.')[0] for i in my_gene_list[3]]
genes = my_gene_list
genes['kmer_format'] = '>' + genes[0] + ':' + genes[1].astype(str) + '-' + genes[2].astype(str)
use_genes = genes.loc[genes['kmer_format'].isin(promoter_kmers.index)]
use_genes = use_genes.set_index('kmer_format').loc[promoter_kmers.index]
use_genes.head()
use_genes.shape
promoter_kmers.shape
np.sum(use_genes.index.isin(promoter_kmers.index))
use_genes = use_genes[~use_genes.index.duplicated()]
promoter_kmers['gene'] = use_genes[3].values
promoter_kmers.head()
enhancer_kmers.head()
use_enhs.head()
use_kmer_enhs = enhancer_kmers.loc[use_enhs.kmer_format]
use_kmer_enhs.shape
use_enhs.shape
use_kmer_enhs['paired_gene'] = use_enhs['paired_gene'].values
use_kmer_enhs.head()
use_kmer_enhs.to_csv('/cndd2/ethan/projects/enh_gene_linkage/promoter_sequence/data/enhancer_kmers_concat_2kb.tsv', sep='\t')
promoter_kmers = promoter_kmers.set_index('gene')
gene_pairs = promoter_kmers.loc[use_kmer_enhs.paired_gene]
use_promoter_kmers = []
missing_genes = []
for i in use_kmer_enhs.paired_gene:
if i in promoter_kmers.index.tolist():
use_promoter_kmers.append(promoter_kmers.loc[i])
else:
missing_genes.append(i)
len(missing_genes)
use_kmer_enhs = use_kmer_enhs.loc[~(use_kmer_enhs.paired_gene.isin(missing_genes))]
use_promoter_kmers= np.array(use_promoter_kmers)
use_promoter_kmers.shape
print(use_kmer_enhs.shape)
use_kmer_enhs.head()
use_kmer_enhs.shape
use_enhs.head()
use_enhs.shape
tmp_kmer_enhs = enhancer_kmers.loc[use_enhs.kmer_format]
tmp_kmer_enhs['paired_gene'] = use_enhs.paired_gene.values
tmp_kmer_enhs.head()
tmp_kmer_enhs.shape
tmp_kmer_enhs.to_csv('/cndd2/ethan/projects/enh_gene_linkage/promoter_sequence/data/enhancer_kmers_concat.tsv', sep='\t')
# # try CCA
cca = CCA()
enh, promoter = cca.fit_transform(use_kmer_enhs.drop('paired_gene', axis=1).values, use_promoter_kmers)
import matplotlib.pyplot as plt
plt.scatter(enh[:, 1], promoter[:, 1], s= 1)
plt.xlabel('Enhancer 3mers')
plt.ylabel('Promoter 3mers')
plt.title('Second canonical component')
cca.get_params()
# +
#cca.score(use_kmer_enhs.drop('paired_gene', axis=1).values, use_promoter_kmers)
# -
plt.scatter(enh[:, 0], promoter[:, 0], s =1)
plt.xlabel('Enhancer 3mers')
plt.ylabel('Promoter 3mers')
plt.title('First canonical component')
from scipy.stats import spearmanr
spearmanr(enh[:, 1], promoter[:, 1])
spearmanr(enh[:, 0], promoter[:, 0])
# # get canonical loadings
use_enh_array = use_kmer_enhs.drop('paired_gene', axis=1).values
kmer_names = use_kmer_enhs.columns.to_list()[:-1]
promoter_loadings = []
enhancer_loadings = []
for i in range(use_enh_array.shape[1]):
enhancer_loadings.append(spearmanr(use_enh_array[:, i], enh[:, 0])[0])
promoter_loadings.append(spearmanr(use_promoter_kmers[:, i], promoter[:, 0])[0])
# +
fig, ax = plt.subplots(figsize=(12, 10))
ax.plot(np.arange(len(kmer_names)), enhancer_loadings/np.mean(np.abs(enhancer_loadings)), '-o', label='Enhancer')
ax.plot(np.arange(len(kmer_names)), promoter_loadings/np.mean(np.abs(promoter_loadings)), '-o', label='Promoter')
ax.set_xticks(np.arange(len(kmer_names)))
ax.set_xticklabels(kmer_names, rotation=90)
ax.set_ylabel('Normalized canonical loading')
ax.set_title('Canonical Loadings')
ax.legend()
# -
plt.scatter(enhancer_loadings, promoter_loadings)
spearmanr(enhancer_loadings, promoter_loadings)
# # get top 30 motifs for each type
top_n = 30
tmp_enh = list(enhancer_loadings)  # copy, so the selection loop below does not clobber the original loadings
tmp_prom = list(promoter_loadings)
enh_motif = []
prom_motif = []
for i in range(top_n):
prom_max = np.argmax(np.abs(tmp_prom))
enh_max = np.argmax(np.abs(tmp_enh))
enh_motif.append(kmer_names[enh_max])
prom_motif.append(kmer_names[prom_max])
tmp_enh[enh_max] = 0
tmp_prom[prom_max] = 0
enh_motif
prom_motif
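The loop above finds the top-|loading| kmers by repeatedly zeroing out the current maximum. `np.argsort` on the negated absolute loadings gives the same ranking non-destructively:

```python
import numpy as np

# Toy loadings and kmer names (hypothetical values for illustration).
loadings = np.array([0.1, -0.8, 0.5, -0.2])
kmers = ['AA', 'AC', 'AG', 'AT']

top_n = 2
top_ix = np.argsort(-np.abs(loadings))[:top_n]  # indices of largest |loading| first
print([kmers[i] for i in top_ix])  # → ['AC', 'AG']
```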
# # shuffle control
promoter_kmers.head()
enhancer_kmers.head()
# # sampling
num_samplings = 10
sample_pairs = []
for i in range(num_samplings):
enh_idx = np.random.randint(0, enhancer_kmers.shape[0], size = 10000)
    prom_idx = np.random.randint(0, promoter_kmers.shape[0], size = 10000)  # sample rows, not columns
sample_pairs.append((enh_idx, prom_idx))
shuffle_cca = CCA()
sample_pairs[0]
enhancer_kmers.values.shape
enhancer_kmers.values[enh_idx, :]
cca1_r = []
cca2_r = []
for i in sample_pairs:
enh_idx = i[0]
prom_idx = i[1]
x, y = shuffle_cca.fit_transform(enhancer_kmers.values[enh_idx, :], promoter_kmers.values[prom_idx, :])
cca1_r.append(spearmanr(x[:, 0], y[:, 0])[0])
cca2_r.append(spearmanr(x[:, 1], y[:, 1])[0])
print(cca1_r)
print(cca2_r)
np.mean(cca1_r), np.mean(cca2_r)
# # try against pairs within a megabase
# - use fangming's to_evals, which represents all pairs within a megabase
# - draw random pairs from it
pairs_in_mb = pd.read_csv('/cndd2/ethan/projects/enh_gene_linkage/non-inear-activation/get_best_gene_in_1mb/evals_in_mb.tsv', sep='\t')
pairs_in_mb.head()
shuffle_cca = CCA()
num_samplings = 5
sample_ids = []
for i in range(num_samplings):
sample = np.random.choice(np.arange(pairs_in_mb.shape[0]), size =20000, replace=False)
sample_ids.append(sample)
cca1_r = []
cca2_r = []
for sample in sample_ids:
sample_pairs = pairs_in_mb.loc[sample]
sample_genes = sample_pairs.gene
sample_ens_id = sample_pairs.ens
sample_ens = enhs_list.loc[sample_ens_id]
sample_promoters = genes.set_index(3).loc[sample_genes]
sample_ens = sample_ens.drop(3, axis=1)
bad_ix = sample_promoters.isna().sum(axis=1) > 0
sample_promoters = sample_promoters.loc[~bad_ix.values]
sample_ens = sample_ens.loc[~bad_ix.values]
sample_promoter_kmers = promoter_kmers.loc[sample_promoters.index.values]
sample_enhancer_kmers = enhancer_kmers.loc[sample_ens.kmer_format.values]
bad = sample_promoter_kmers.isna().sum(axis=1)> 0
sample_promoter_kmers = sample_promoter_kmers.loc[~bad.values]
sample_enhancer_kmers = sample_enhancer_kmers.loc[~bad.values]
x, y = shuffle_cca.fit_transform(sample_enhancer_kmers.values, sample_promoter_kmers.values)
cca1_r.append(spearmanr(x[:, 0], y[:, 0])[0])
cca2_r.append(spearmanr(x[:, 1], y[:, 1])[0])
cca1_r
cca2_r
np.mean(cca1_r)
np.mean(cca2_r)
# # are the features learned different than random features
num_samplings = 10
sample_pairs = []
for i in range(num_samplings):
enh_idx = np.random.randint(0, enhancer_kmers.shape[0], size = 10000)
    prom_idx = np.random.randint(0, promoter_kmers.shape[0], size = 10000)  # sample rows, not columns
sample_pairs.append((enh_idx, prom_idx))
cca1_r = []
cca2_r = []
for i in sample_pairs:
enh_idx = i[0]
prom_idx = i[1]
x, y = cca.transform(enhancer_kmers.values[enh_idx, :], promoter_kmers.values[prom_idx, :])
cca1_r.append(spearmanr(x[:, 0], y[:, 0])[0])
cca2_r.append(spearmanr(x[:, 1], y[:, 1])[0])
cca1_r
cca2_r
np.mean(cca1_r)
np.mean(cca2_r)
cca1_r = []
cca2_r = []
for sample in sample_ids:
sample_pairs = pairs_in_mb.loc[sample]
sample_genes = sample_pairs.gene
sample_ens_id = sample_pairs.ens
sample_ens = enhs_list.loc[sample_ens_id]
sample_promoters = genes.set_index(3).loc[sample_genes]
sample_ens = sample_ens.drop(3, axis=1)
bad_ix = sample_promoters.isna().sum(axis=1) > 0
sample_promoters = sample_promoters.loc[~bad_ix.values]
sample_ens = sample_ens.loc[~bad_ix.values]
sample_promoter_kmers = promoter_kmers.loc[sample_promoters.index.values]
sample_enhancer_kmers = enhancer_kmers.loc[sample_ens.kmer_format.values]
bad = sample_promoter_kmers.isna().sum(axis=1)> 0
sample_promoter_kmers = sample_promoter_kmers.loc[~bad.values]
sample_enhancer_kmers = sample_enhancer_kmers.loc[~bad.values]
x, y = cca.transform(sample_enhancer_kmers.values, sample_promoter_kmers.values)
cca1_r.append(spearmanr(x[:, 0], y[:, 0])[0])
cca2_r.append(spearmanr(x[:, 1], y[:, 1])[0])
cca1_r
cca2_r
np.mean(cca1_r)
np.mean(cca2_r)
# # validate model on correlated from atac
gene_atac = []
enh_atac = []
for i in atac:
tmp = i.split(':')
gene_atac.append(tmp[0])
enh_atac.append(tmp[1])
atac_enhs = enhs_list.loc[np.array(enh_atac).astype(int)]
atac_enhs['paired_gene'] = gene_atac
enhancer_kmers.head()
atac_enhs.head()
use_enhs_atac = enhancer_kmers.loc[atac_enhs.kmer_format.values]
use_enhs_atac['paired_gene'] = atac_enhs['paired_gene'].values
use_promoter_atac = []
missing_genes = []
for i in use_enhs_atac.paired_gene:
if i in promoter_kmers.index.tolist():
use_promoter_atac.append(promoter_kmers.loc[i])
else:
missing_genes.append(i)
len(missing_genes)
use_enhs_atac = use_enhs_atac.loc[~(use_enhs_atac.paired_gene.isin(missing_genes))]
use_promoter_atac = np.array(use_promoter_atac)
use_promoter_atac.shape
atac_enh, atac_prom = cca.fit_transform(use_enhs_atac.drop('paired_gene', axis=1).values, use_promoter_atac)
spearmanr(atac_enh[:, 1], atac_prom[:, 1])
plt.scatter(atac_enh[:, 1], atac_prom[:, 1], s=1)
plt.xlabel('Enhancer kmers')
plt.ylabel('Promoter kmers')
plt.title('Second canonical component')
plt.scatter(atac_enh[:, 0], atac_prom[:, 0], s=1)
plt.xlabel('Enhancer kmers')
plt.ylabel('Promoter kmers')
plt.title('First canonical component')
spearmanr(atac_enh[:, 0], atac_prom[:, 0])
atac_enhs.paired_gene.unique().shape
np.sum(pd.Series(use_enhs.paired_gene.unique()).isin(atac_enhs.paired_gene.unique()))
use_enh_array = use_enhs_atac.drop('paired_gene', axis=1).values
kmer_names = use_enhs_atac.columns.to_list()[:-1]
promoter_loadings = []
enhancer_loadings = []
for i in range(use_enh_array.shape[1]):
enhancer_loadings.append(spearmanr(use_enh_array[:, i], atac_enh[:, 0])[0])
promoter_loadings.append(spearmanr(use_promoter_atac[:, i], atac_prom[:, 0])[0])
plt.scatter(enhancer_loadings, promoter_loadings)
spearmanr(enhancer_loadings, promoter_loadings)
| .ipynb_checkpoints/ethan_sequence_cca_concat_kmers-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] nbsphinx="hidden"
# # Spectral Analysis of Deterministic Signals
#
# *This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. Please direct questions and suggestions to [<EMAIL>](mailto:<EMAIL>).*
# -
# ## Summary
#
# * The discrete Fourier transform (DFT) of a complex exponential signal with arbitrary frequency does not consist of a contribution at this frequency alone. Unless the frequency coincides with a DFT eigenfrequency, additional contributions are inevitably present throughout the entire spectrum. The maximum of the magnitude spectrum (main lobe) is located approximately at the frequency of the exponential signal. The level of the additional contributions (side lobes) typically decays with increasing distance in frequency from the maximum. This effect is known as the **leakage effect** or spectral leakage. It is a consequence of considering only a finite number of samples in the DFT.
#
# * Windowing refers to weighting the samples before performing the DFT. The amount of spectral leakage can be controlled by the window function. The rectangular window (equal weighting of all samples) yields the narrowest main lobe of all windows. Other window functions have wider main lobes in exchange for lower or faster-decaying side lobes. The choice of a particular window function is therefore, among other things, a trade-off between main lobe width and side lobe level.
#
# * The leakage effect limits the spectral resolution of the DFT. The width of the main lobe of a particular window determines its capability to resolve two exponential signals with comparable levels and similar frequencies; the decay of its side lobes determines its capability to resolve two exponential signals with disparate levels and dissimilar frequencies. The choice of a window function is therefore a trade-off between high resolution and high dynamic range, and is in general application specific.
#
# * Zero-padding a signal before computing its DFT is equivalent to a bandlimited interpolation in the frequency domain.
#
# * For the short-time Fourier transform (STFT), a signal is split into short segments for which the DFT is subsequently computed. The magnitude of the STFT is known as the spectrogram. In spectral analysis, the STFT provides insight into the temporal evolution of the spectral content of a signal. It is of special interest for signals whose spectral properties change over time (non-stationary signals). The leakage effect applies to each of these segment-wise DFTs.
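The zero-padding statement above can be checked numerically: appending zeros before the DFT evaluates the same spectrum on a denser frequency grid, so the original bins must reappear unchanged among the interpolated ones. A minimal sketch with an arbitrary test signal:

```python
import numpy as np

N = 16
rng = np.random.default_rng(0)
x = rng.standard_normal(N)  # arbitrary length-N signal

X = np.fft.fft(x)            # N-point DFT
X_zp = np.fft.fft(x, 2 * N)  # DFT of the signal zero-padded to length 2N

# the original DFT bins reappear at every second bin of the finer grid
print(np.allclose(X_zp[::2], X))  # True
```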
# **Example**
#
# The following example shows the impact of two different window functions on the magnitude spectrum of a complex exponential signal. The red line indicates the frequency $\Omega_0$ of the exponential signal. The leakage effect is clearly visible in both cases. However, the width of the main lobe and the decay of the side lobes differ between the two window functions.
# +
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
N = 32 # length of the DFT
P = 10.3 # period of the exponential signal
w1 = np.ones(N) # first window function
w2 = np.hanning(N) # second window function
def plot_spectrum(X, title):
plt.axvline(x=P, linewidth=2, color='r', alpha=.5)
    plt.stem(mu, 20*np.log10(np.abs(X)), basefmt=' ', bottom=-300, use_line_collection=True)
plt.title(title)
plt.xlabel(r'$\mu$')
plt.axis([0, N, -100, 40])
plt.grid()
# generate signal
k = np.arange(N)
Om0 = P*(2*np.pi/N) # frequency of exponential signal
x = np.exp(1j*Om0*k)
# DFTs of the windowed signals
mu = np.arange(N)
X1 = np.fft.fft(x * w1)
X2 = np.fft.fft(x * w2)
# plot spectra
plt.figure(figsize = (10, 8))
ax1 = plt.subplot(2, 2, 1)
plot_spectrum(X1, 'rectangular window')
plt.ylabel(r'$|X[\mu]|$ in dB')
ax2 = plt.subplot(2, 2, 2, sharey=ax1)
plot_spectrum(X2, 'Hanning window')
plt.setp(ax2.get_yticklabels(), visible=False)
plt.tight_layout()
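The STFT described in the summary — split the signal into short segments, window each, and DFT it — can be sketched with numpy alone (segment length and hop are arbitrary choices here, not the notebook's parameters):

```python
import numpy as np

def stft_mag(x, nperseg=256, hop=128):
    """Magnitude STFT: hop through the signal, window each segment, DFT it."""
    w = np.hanning(nperseg)
    starts = range(0, len(x) - nperseg + 1, hop)
    frames = np.stack([x[s:s + nperseg] * w for s in starts])
    return np.abs(np.fft.rfft(frames, axis=1)).T  # frequency bins x time frames

fs = 8000  # assumed sampling rate in Hz
n = np.arange(fs)
# non-stationary test signal: frequency jumps from 500 Hz to 2000 Hz after 1 s
x = np.concatenate([np.sin(2 * np.pi * 500 * n / fs),
                    np.sin(2 * np.pi * 2000 * n / fs)])

S = stft_mag(x)
# dominant bin in the first and last frame, converted to Hz
print(S[:, 0].argmax() * fs / 256, S[:, -1].argmax() * fs / 256)  # 500.0 2000.0
```

The spectrogram `S` resolves the frequency jump in time, which a single DFT over the whole signal cannot.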
# + [markdown] nbsphinx="hidden"
# **Copyright**
#
# This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *<NAME>, Digital Signal Processing - Lecture notes featuring computational examples*.
| spectral_analysis_deterministic_signals/summary.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
# os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
import pandas as pd
import matplotlib.pyplot as plt
from mlxtend.frequent_patterns import apriori, association_rules
import numpy as np
import time
# -
status = pd.read_csv("./bikeshare/status.csv")
status
status_c = status.copy()
status_c['city_zip'] = ''
status_c.loc[:, 'city_zip'] = status_c.loc[:, 'station_id'].apply(lambda x: city2zip[id2city[x]])
# station.loc[station['id'] == x, 'city'].values[0]
status_c
status.station_id.unique()
station = pd.read_csv("./bikeshare/station.csv")
station
city2zip = {'San Francisco':94107, 'Redwood City':94063, 'Palo Alto':94301, 'Mountain View':94041, 'San Jose':95113}
id2city = dict()
for i in range(station.shape[0]):
id2city[station.loc[i, 'id']] = station.loc[i, 'city']
id2city
trip = pd.read_csv("./bikeshare/trip.csv")
trip
trip.zip_code.unique()
weather = pd.read_csv("./bikeshare/weather.csv")
weather.date = weather.date.apply(lambda x: x.split("/")[2] + "-" + x.split("/")[0] + "-" + x.split("/")[1])
weather.date = pd.to_datetime(weather.date).dt.strftime('%Y/%m/%d')
weather
weather.zip_code.unique()
weather.events.unique()
weather.events = weather.events.apply(lambda x: 0 if pd.isna(x) else 1)  # NaN means no recorded event; np.float was removed from NumPy
weather.ffill(inplace=True)  # forward-fill remaining missing values
weather
weather.isna().any()
trips = pd.DataFrame(data=0, columns=status.station_id.unique(), index=range(trip.shape[0]))
trips
start = time.time()
for i in range(trip.shape[0]):
    s = trip.loc[i, 'start_station_id']
    e = trip.loc[i, 'end_station_id']
    trips.loc[i, s] = 1
    trips.loc[i, e] = 1
end = time.time()
print(f'done, {end - start} sec')
trips
start = time.time()
frequent_itemsets = apriori(trips, min_support=0.001, use_colnames=True, max_len=2, low_memory=True)
end = time.time()
print(f'done, {end-start} sec')
frequent_itemsets
rules = association_rules(frequent_itemsets, metric='confidence', min_threshold=0.15)
rules
rules = rules[rules['lift'] >= 1]
rules = rules.sort_values(by=['support', 'confidence', 'lift'], ascending=False)
rules
# len(rules.antecedents.unique()) # 37
len(rules.consequents.unique()) # 21
rules.to_csv("./bikeshare/trip_association_rules.csv")
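The metrics used to filter the rules above (support, confidence, lift) can be computed by hand on a toy one-hot table, mirroring what `association_rules` reports. Illustrative data only, not the bikeshare trips:

```python
import pandas as pd

# four toy "trips" over two stations A and B (one-hot, like the trips table)
toy = pd.DataFrame({'A': [1, 1, 1, 0], 'B': [1, 1, 0, 1]}, dtype=bool)

support_A = toy['A'].mean()                # fraction of trips containing A
support_B = toy['B'].mean()
support_AB = (toy['A'] & toy['B']).mean()  # fraction containing both
confidence = support_AB / support_A        # P(B | A)
lift = confidence / support_B              # < 1 means A and B co-occur less than chance
print(support_AB, round(confidence, 3), round(lift, 3))
```

Filtering on `lift >= 1`, as done above, keeps only rules where the antecedent and consequent co-occur at least as often as independence would predict.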
status.loc[0, 'time']
s2 = status_c.loc[status_c['station_id']==2, :]
s2
# Note the time format (otherwise the merge will fail)
# The day of the week with Monday=0, Sunday=6.
s2['dayofweek'] = pd.to_datetime(s2.time, format='%Y-%m-%d %H:%M:%S').dt.dayofweek
s2['hour'] = pd.to_datetime(s2.time, format='%Y-%m-%d %H:%M:%S').dt.hour
s2['time'] = pd.to_datetime(s2['time'].str[:10]).dt.strftime('%Y/%m/%d')
# s2['time'] = s2['time'].str[:10]
# s2 = s2.drop('time', axis=1)
s2
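The `dayofweek` convention noted in the comment above (Monday=0, Sunday=6) is easy to confirm on known dates (a minimal check, dates chosen arbitrarily):

```python
import pandas as pd

ts = pd.to_datetime(['2015-08-31', '2015-09-06'])  # a Monday and a Sunday
print(ts.dayofweek.tolist())  # [0, 6]
```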
weather[weather['zip_code'] == 95113]
pd.merge(s2, weather[weather['zip_code'] == 95113], left_on=['time', 'city_zip'], right_on=['date', 'zip_code'])
s2 = s2.groupby(['time', 'hour']).mean()
s2.reset_index(inplace=True, level=['time', 'hour'])
s2
s2_X = pd.merge(s2, weather[weather['zip_code'] == 95113], left_on=['time', 'city_zip'], right_on=['date', 'zip_code'])
s2_X
s2_X.isna().any()
s2_X.loc[s2_X.max_gust_speed_mph.isna(), :].time.unique()
s2.bikes_available[:48].plot()
| bikeshare_AssociationRule.ipynb |