# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
class Int:
    def __init__(self, val):
        self.val = val
    def __repr__(self):
        return "Int:%d" % self.val
    def __add__(self, other):
        # operator overloading: num1 + num2 adds the wrapped values
        return self.val + other.val


num1 = Int(88)
num2 = Int(77)
num2.val = 68
num1 + num2  # 88 + 68 = 156
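# As written, `__add__` returns a plain `int`, so the result of `num1 + num2` is no longer an `Int`. A common refinement — a sketch, not part of the original notebook — is to return a new `Int`, so additions can be chained and the result keeps its `repr`:

```python
class Int:
    def __init__(self, val):
        self.val = val

    def __repr__(self):
        return "Int:%d" % self.val

    def __add__(self, other):
        # return a new Int so the result still supports + and repr()
        return Int(self.val + other.val)

result = Int(88) + Int(68)
print(repr(result))  # Int:156
```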
misc/magic.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="up8pg2ruMCdx" colab_type="text"
# **Download of the original dataset:**
# https://www.kaggle.com/luisfredgs/imdb-ptbr/kernels
#
# **Data used in this tutorial:**
# https://drive.google.com/drive/folders/1hwhqN-CUIGGZZXpVnka08yaItZsMQapx?usp=sharing
#
# + [markdown] id="97xbtp5gsCSp" colab_type="text"
# # Data processing and analysis
#
# + id="mh02JyTzl5Ee" colab_type="code" colab={}
# Preparing the environment (importing libraries and downloads...)
# !pip install nltk
import nltk
nltk.download('rslp')
nltk.download('stopwords')
nltk.download('punkt')
import re
import pandas as pd
# + id="PVekMU8El5Ek" colab_type="code" colab={}
# Importing the dataset
df = pd.read_csv("dataset.csv", sep=",", encoding="utf8")
# + id="HMdvOHaEl5En" colab_type="code" outputId="4f4ee7ce-c138-4a91-e590-6c86dce10300" executionInfo={"status": "ok", "timestamp": 1561491232939, "user_tz": 180, "elapsed": 3786, "user": {"displayName": "<NAME>", "photoUrl": "https://lh6.googleusercontent.com/-n9tJsaNisS0/AAAAAAAAAAI/AAAAAAAAJMg/nqhIYX_GgmI/s64/photo.jpg", "userId": "07895704557374551617"}} colab={"base_uri": "https://localhost:8080/", "height": 138}
# Count of reviews per sentiment
df.groupby('sentiment').count()
# + [markdown] id="UHxwzjayl5Er" colab_type="text"
# ### Removing and cleaning data
# + id="jnEgbdaWl5Er" colab_type="code" outputId="f0b10cc4-dd53-49da-c3b1-43f5fed6522e" executionInfo={"status": "ok", "timestamp": 1561491232940, "user_tz": 180, "elapsed": 3731, "user": {"displayName": "<NAME>", "photoUrl": "https://lh6.googleusercontent.com/-n9tJsaNisS0/AAAAAAAAAAI/AAAAAAAAJMg/nqhIYX_GgmI/s64/photo.jpg", "userId": "07895704557374551617"}} colab={"base_uri": "https://localhost:8080/", "height": 198}
# Drop unused columns and create the numeric label column
df.drop(columns=['id', 'text_en'], inplace=True)
df['classification'] = df["sentiment"].replace(["neg", "pos"], [0, 1])
# Lowercase the text
df['text_pt'] = df['text_pt'].str.lower()
df.head(5)
# + id="vAvksLTVl5Eu" colab_type="code" colab={}
# Helper to turn a token list back into a single string:
# repr() renders the list, then the surrounding square brackets are stripped
def remove_brackets(column):
    return re.sub(r'[\[\]]', '', repr(column))
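# For illustration (not part of the original notebook): the transformation keeps the quotes and commas from the list's `repr()` and only removes the square brackets:

```python
import re

def remove_brackets(column):
    # strip the [ and ] from the list's repr()
    return re.sub(r'[\[\]]', '', repr(column))

print(remove_brackets(['bom', 'film']))  # 'bom', 'film'
```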
# + [markdown] id="ClSYo6eUNmhn" colab_type="text"
# ### Stemming and stopwords
# + id="VMbuTN79y20S" colab_type="code" outputId="82003e83-21bb-41e6-cec7-bb5727f0eb47" executionInfo={"status": "ok", "timestamp": 1561491697602, "user_tz": 180, "elapsed": 236, "user": {"displayName": "<NAME>", "photoUrl": "https://lh6.googleusercontent.com/-n9tJsaNisS0/AAAAAAAAAAI/AAAAAAAAJMg/nqhIYX_GgmI/s64/photo.jpg", "userId": "07895704557374551617"}} colab={"base_uri": "https://localhost:8080/", "height": 54}
# %%time
from nltk.tokenize import word_tokenize
stop_words = nltk.corpus.stopwords.words('portuguese')
stemmer = nltk.stem.RSLPStemmer()
# Apply stemming and stopword removal to the training/test corpus
for x in range(0, len(df['text_pt'])):
    # Remove stop words from the text
    word_tokens = word_tokenize(df['text_pt'][x])
    filtered_sentence = [w for w in word_tokens if w not in stop_words]
    # Remove suffixes (stemming)
    text_tokenized = word_tokenize(remove_brackets(filtered_sentence))
    line = [stemmer.stem(word) for word in text_tokenized]
    df.loc[x, 'text_pt'] = remove_brackets(line)
# + id="7MlAp5DLl5E4" colab_type="code" colab={}
from sklearn.feature_extraction.text import CountVectorizer
from nltk.tokenize import RegexpTokenizer
# Regex tokenizer that drops symbols, punctuation, etc. from the dataset
token = RegexpTokenizer(r'[a-zA-Z0-9]+')
# Build the vectorizer with the parameters below
cv = CountVectorizer(lowercase=True, stop_words=None, ngram_range=(1, 2),
                     tokenizer=token.tokenize)
# Sparse-matrix representation of the text_pt column
text_counts = cv.fit_transform(df['text_pt'])
# + id="S12zAzVjl5E7" colab_type="code" colab={}
# Vocabulary
cv.vocabulary_
# + [markdown] id="BnWBnQdGl5E_" colab_type="text"
# # Testing the model
# + id="xCHguDjRl5E_" colab_type="code" outputId="3fd0f6e6-2f39-4e5c-ac0a-df1d287a4f1b" executionInfo={"status": "ok", "timestamp": 1561492095087, "user_tz": 180, "elapsed": 550, "user": {"displayName": "<NAME>", "photoUrl": "https://lh6.googleusercontent.com/-n9tJsaNisS0/AAAAAAAAAAI/AAAAAAAAJMg/nqhIYX_GgmI/s64/photo.jpg", "userId": "07895704557374551617"}} colab={"base_uri": "https://localhost:8080/", "height": 35}
# Import libraries for sampling, the model, and model evaluation.
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn import metrics
# Split the dataset into train and test sets
X_train, X_test, y_train, y_test = train_test_split(text_counts, df['classification'],
                                                    test_size=0.34, random_state=1,
                                                    shuffle=True)
# Create and train the model
clf = MultinomialNB().fit(X_train, y_train)
# Predict on the test set to measure accuracy
y_predicted = clf.predict(X_test)
print("MultinomialNB Accuracy:", metrics.accuracy_score(y_test, y_predicted).round(3))
# + [markdown] id="0KRxLCVBl5FC" colab_type="text"
# ### Reading text data (txt) to generate a chart
# + id="UXdQmHlVl5FJ" colab_type="code" colab={}
# Split into paragraphs
with open('./texto_teste.txt', 'r') as file_teste:
    paragraph = file_teste.read().split('\n\n')
# Split into sentences
with open('./texto_teste.txt', 'r') as file_teste:
    phrase = file_teste.read().split('.')
# + [markdown] id="QOCouvtMyIeU" colab_type="text"
# ### Apply stemming and stopword removal to the text
# + id="BB76UhdYl5FM" colab_type="code" outputId="8d35d9d5-6e40-4e07-a758-35159e39c628" executionInfo={"status": "ok", "timestamp": 1561491786135, "user_tz": 180, "elapsed": 535, "user": {"displayName": "<NAME>", "photoUrl": "https://lh6.googleusercontent.com/-n9tJsaNisS0/AAAAAAAAAAI/AAAAAAAAJMg/nqhIYX_GgmI/s64/photo.jpg", "userId": "07895704557374551617"}} colab={"base_uri": "https://localhost:8080/", "height": 54}
# Re-create the stemmer
stemmer = nltk.stem.RSLPStemmer()
# Create the results dataframe
df_result = pd.DataFrame()
# Tokenize, remove stop words and
# transform the data for prediction
neg, pos = 0, 0
for x in range(0, len(phrase) - 1):
    # Tokenize the sentence
    text_tokenized = word_tokenize(phrase[x])
    # Remove stop words from the text
    filtered_sentence = [w for w in text_tokenized if w not in stop_words]
    # Stem the input text
    line = [stemmer.stem(word) for word in filtered_sentence]
    line = remove_brackets(line)
    # Predict the sentiment of each sentence
    value_trans = cv.transform([line])
    predict_phrase = clf.predict(value_trans)
    # Count predictions by class (0 = negative, 1 = positive, as encoded above)
    if predict_phrase == 0:
        neg += 1
    else:
        pos += 1
# Save the counts in the dataframe
df_result['positive'] = [pos]
df_result['negative'] = [neg]
# + [markdown] id="Hsr4I7bel5FS" colab_type="text"
# # Creating the sentiment-analysis chart
# + id="j4xPqWCOl5FU" colab_type="code" outputId="fd635758-5f81-40b8-dd09-e13b93223552" executionInfo={"status": "ok", "timestamp": 1561494125525, "user_tz": 180, "elapsed": 715, "user": {"displayName": "<NAME>", "photoUrl": "https://lh6.googleusercontent.com/-n9tJsaNisS0/AAAAAAAAAAI/AAAAAAAAJMg/nqhIYX_GgmI/s64/photo.jpg", "userId": "07895704557374551617"}} colab={"base_uri": "https://localhost:8080/", "height": 397}
def generate_piechart(df_result):
    import matplotlib.pyplot as plt
    labels = df_result.columns.tolist()
    sizes = df_result.values.tolist()[0]
    color = ['lightskyblue', 'lightcoral']
    explode = (0.15, 0)
    fig1, ax1 = plt.subplots(figsize=(5, 5))
    ax1.pie(sizes, labels=labels, explode=explode,
            shadow=True, autopct='%1.1f%%', startangle=140, colors=color)
    ax1.set_title('Sentiment Analysis by phrases - NLTK', fontsize=15)
    ax1.axis('equal')
    plt.show()
    print("Number of paragraphs: {}".format(len(paragraph)))
    print("Number of phrases: {}".format(len(phrase) - 1))
    print("Number of positive phrases: {}".format(df_result['positive'].values.tolist()[0]))
    print("Number of negative phrases: {}".format(df_result['negative'].values.tolist()[0]))


# Generate the chart
generate_piechart(df_result)
NPL.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img width=150 src="https://upload.wikimedia.org/wikipedia/commons/thumb/1/1a/NumPy_logo.svg/200px-NumPy_logo.svg.png"></img>
# # Part.2-1-02 Advanced NumPy Array Operations
# # [Basics 02]
# * NumPy provides a multidimensional container for elements of the same type, called an array. Its full name is N-dimensional Array, usually abbreviated as ndarray or array. Types:
# <img src="https://github.com/sueshow/Data_Science_Marathon/blob/main/picture/array_type01.png"></img>
# Besides the wider variety of types, each type also offers flexibility in its value range and storage size.
# <img src="https://github.com/sueshow/Data_Science_Marathon/blob/main/picture/array_type02.png"></img>
import numpy as np
print(np.__version__)
a = np.arange(15).reshape(3, 5)
print(a.dtype)
print(a.itemsize)
# int32 is an int 32 bits long, so each element occupies 4 bytes
# * NumPy accepts several ways of specifying a type:
#   * 'int': a string
#   * 'int64': a string
#   * np.int64: an object
#   * np.dtype('float64'): an object
# <br>
#
# Under the hood, a NumPy type is defined by an np.dtype object, which carries information such as the type name, value range and storage size.
# * `is` is a strict comparison: every aspect must match, including which object it is, so it only holds against the same np.dtype object
# * `==` only applies dtype-specific rules, e.g. comparing the type's representation, and accepts either a string or an object form
print(a.dtype == 'int32')
print(a.dtype is 'int32')
print(a.dtype is np.int32)
print(a.dtype is np.dtype('int32'))
# # [Advanced 02]
# ## 1. Reshaping NumPy arrays
# ### 1.1 `flatten()` and `ravel()`
#
# * Similarity: both `flatten()` and `ravel()` can turn a multidimensional array into a one-dimensional array, and both of the call styles below give exactly the same result:
# <br>
#
# |np. function|array-object method|
# |---|---|
# |(flatten has no np. form; method only)|a.flatten(order='C')|
# |np.ravel(a, order='C')|a.ravel(order='C')|
a = np.array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
a
a.flatten()
# * Difference: `ravel()` returns a view of the original array, so changing elements through the object returned by `ravel()` "will change the elements of the original array" (`flatten()` returns a copy).
b = a.ravel()
b
# If we change an element of array b, the corresponding element of the original array a changes too.
b[3] = 100
b
a
# * The order argument of `flatten()` and `ravel()` defaults to C; the common values are C and F. C means C-style, flattening in row-major order; F means Fortran-style, flattening in column-major order.
a.ravel(order='C')
a.ravel(order='F')
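# A small self-contained check of the two orders (an illustration added here, not from the original notebook):

```python
import numpy as np

m = np.array([[1, 2, 3],
              [4, 5, 6]])
# C order walks rows first, F order walks columns first
print(m.ravel(order='C'))  # [1 2 3 4 5 6]
print(m.ravel(order='F'))  # [1 4 2 5 3 6]
```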
# ### 1.2 `reshape()`
#
# * `reshape()`: can be called as `np.reshape(a, new_shape)` or `a.reshape(new_shape)`.
a = np.arange(15)
a
a.reshape((3, 5))
# * If the new shape's total size differs from the original shape's total, an error is raised.
a.size
a.reshape((3, 6))
# * When reshaping, one dimension of the new shape can be left as -1 and NumPy will infer it automatically.
a.reshape((5, -1))
# * `reshape()` returns a view: changing an element of the reshaped array "also changes the corresponding element of the original array".
b = a.reshape((3, 5))
b[0, 2] = 100
b
# The value of a[2] has been changed.
a
# ### 1.3 `resize()`
#
# * Similarity to `reshape()`:
#   * `resize()` can be called as `np.resize(a, new_shape)` or `a.resize(new_shape, refcheck=True)`. Resizing an array that is referenced elsewhere may raise an error; in that case set the `refcheck` argument to `False` (default `True`). Note that `refcheck` only exists on the method form `a.resize`.
b = np.arange(15)
b
# * Differences from `reshape()`:
#   * If the resized size exceeds the total number of elements, the trailing elements are set to 0.
b.size
b.resize((3, 6), refcheck=False)
b
# * Important: if the resized size is smaller than the total number of elements, the elements of the resized array are taken in C-style order up to the new size.
b.resize(3, refcheck=False)
b
# ## 2. Axes and dimensions
# * The number of axes equals the number of dimensions of a NumPy array; axes are numbered starting from 0. The examples below illustrate this.
# ### 2.1 Axes of a one-dimensional array
#
# * A one-dimensional array has a single axis, axis 0.
a = np.array([1, 2, 3])
a
# (Diagram illustrating the axis of a one-dimensional array.)
# ### 2.2 Axes of a two-dimensional array
#
# * A two-dimensional array has ndim 2: axis 0 runs along the rows and axis 1 along the columns.
a = np.arange(6).reshape(3, 2)
a
# (Diagram illustrating the axes of a two-dimensional array.)
# ### 2.3 Axes of a three-dimensional array
#
# * A three-dimensional array has 3 axes; the axis order can be read as going "from outer to inner" and "from row to column".
# Looking at the three-dimensional example from the previous day's notebook, it can be understood as two 4 $\times$ 3 two-dimensional arrays placed together.
a = np.array([[[1, 2, 3], [4, 5, 6],
[7, 8, 9], [10, 11, 12]],
[[1, 2, 3], [4, 5, 6],
[7, 8, 9], [10, 11, 12]]])
a
# (Diagram illustrating the axes of a three-dimensional array.)
#
# The `shape` attribute also helps in understanding the axes of a multidimensional array.
a.shape
# To sum elements along an axis, call `sum()` with the axis argument.
a.sum(axis=0)
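# A self-contained two-dimensional illustration of per-axis sums (added here for illustration, not from the original notebook):

```python
import numpy as np

m = np.array([[1, 2, 3],
              [4, 5, 6]])
# axis=0 collapses the rows, axis=1 collapses the columns
print(m.sum(axis=0))  # [5 7 9]
print(m.sum(axis=1))  # [ 6 15]
```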
# ### 2.4 Adding axes with `np.newaxis`
#
# * A use similar to `reshape()`.
# * To add an axis, use the `np.newaxis` object: place `np.newaxis` at the position where the new axis should go.
# * Difference from `reshape()`: the dimension added by `np.newaxis` always has length 1, while `reshape()` can target any shape (not necessarily 1).
a = np.arange(12).reshape(2, 6)
a
a[:,np.newaxis,:].shape
# ## 3. Joining and splitting NumPy arrays
a = np.arange(10).reshape(5, 2)
b = np.arange(6).reshape(3, 2)
a
b
# ### 3.1 Joining: `concatenate()`, `stack()`, `hstack()`, `vstack()`
# * Important: when joining arrays with `concatenate()`, all axes except the one being joined along (default axis 0) must have exactly the same shape, otherwise an error is raised.
#
# ```python
# numpy.concatenate((a1, a2, ...), axis=0, out=None)
# ```
np.concatenate((a, b))
# * Differences:
#   * `stack()` returns an array with one more dimension than the inputs.
#   * The dimensions returned by `hstack()` and `vstack()` depend on the arrays being joined.
# * Joining rules:
#   * `stack()` requires all arrays to have exactly the same shape
#   * `hstack()` and `vstack()` follow the rule above: all axes except the joined one must match exactly.
#
# |function|description|
# |---|---|
# |numpy.stack(arrays, axis=0, out=None)|join along the given axis|
# |numpy.hstack(tup)|join along the horizontal axis|
# |numpy.vstack(tup)|join along the vertical axis|
# stack() example
c = np.arange(10).reshape(5, 2)
np.stack((a, c), axis=1)
# hstack() example
np.hstack((a, c))
# vstack() example
np.vstack((a, b))
# ### 3.2 Splitting: `split()`, `hsplit()`, `vsplit()`
a = np.arange(10).reshape(5, 2)
a
# * Syntax of `split()`:
#
# ```python
# numpy.split(array, indices_or_sections, axis=0)
# ```
#
# * indices_or_sections
#   * given a single integer, the array is divided into that many equal parts along the axis
#   * given a List of integers, the array is split at those indices
# * Example:
#   * `indices_or_sections=[2, 3]` splits as follows (again along the given axis)
# ```
# ary[0:2]
# ary[2:3]
# ary[3:n]
# ```
#
# * `hsplit` and `vsplit` are very similar to `split`; they split along the horizontal and vertical axes respectively.
# split into equal parts along axis 0
np.split(a, 5)
# split into three arrays of shape (2,2), (1,2), (2,2); returns a List of 3 arrays
np.split(a, [2,3])
b = np.arange(30).reshape(5, 6)
b
# split into equal parts along the horizontal axis
np.hsplit(b, 3)
# split at the given indices along the horizontal axis
np.hsplit(b, [2, 3, 5])
# split at the given indices along the vertical axis; sections beyond the array return empty arrays
np.vsplit(b, [2, 4, 6])
# ## 4. Iteration
# * Iterating over a one-dimensional array works the same as iterating over a Python collection type (e.g. a List).
a = np.arange(5)
a
for i in a:
print(i)
# * Iterating over a multidimensional array goes along axis 0, yielding each row.
b = np.arange(6).reshape(2, 3)
b
for row in b:
print(row)
# * To iterate over every element of a multidimensional array, use the `flat` attribute.
for i in b.flat:
print(i)
# ## 5. Searching and sorting
# ### 5.1 Finding the maximum and minimum: `amax()`, `amin()`, `max()`, `min()`
#
# * The largest and smallest elements of an array can be obtained via `amax()`, `amin()`, `max()`, `min()`, optionally per axis.
#
# * Basic syntax:
#
# |np. function|array-object method|
# |---|---|
# |numpy.amax(array, axis=None, keepdims=<no value>)|ndarray.max(axis=None, keepdims=False)|
# |numpy.amin(array, axis=None, keepdims=<no value>)|ndarray.min(axis=None, keepdims=False)|
a = np.random.randint(1, 20, 10)
a
# the largest element in the array
np.amax(a)
# the smallest element in the array
np.amin(a)
# * Multidimensional arrays: same usage; maxima and minima can be listed per axis.
b = a.reshape(2, 5)
b
# With keepdims=True, the result keeps the original array's number of dimensions.
np.amax(b, keepdims=True)
# maximum of each row
b.max(axis=1)
# amax can likewise list the maximum of each row by axis
np.amax(b, axis=1)
# minimum of each column
b.min(axis=0)
# ### 5.2 Finding the indices of the maximum and minimum: `argmax()` and `argmin()`
#
# * Unlike the above, `argmax` / `argmin` return the index of the maximum or minimum value, optionally per axis.
#
# Basic syntax:
#
# |np. function|array-object method|
# |---|---|
# |numpy.argmax(array, axis=None)|ndarray.argmax(axis=None)|
# |numpy.argmin(array, axis=None)|ndarray.argmin(axis=None)|
np.random.seed(0)
a = np.random.randint(1, 20, size=(3, 4))
a
# Without an axis argument, `argmax()` and `argmin()` return the index into the flattened array.
np.argmax(a)
# index of the maximum of each column: [0, 0, 2, 1]
np.argmax(a, axis=0)
# the value 1 is the minimum; its index in the flattened array is 2.
a.argmin()
# ### 5.3 Finding elements matching a condition: `where`
#
# * Syntax:
# ```python
# numpy.where(condition[, x, y])
# ```
a
# Passing a condition returns the indices of the matching elements. Note that, for the two-dimensional array below, the returned index arrays must be read together, i.e.
# ```
# (array([0, 0, 1, 2]),
# array([0, 1, 3, 2]))
# ```
#
# a[0, 0] is 13<br />
# a[0, 1] is 16<br />
# a[1, 3] is 19<br />
# a[2, 2] is 13
#
# The elements at these indices all satisfy the condition "greater than 10".
np.where(a > 10)
# If the x and y arguments are given, the elements are replaced: in the example below, elements greater than 10 become "Y", the rest "N".
np.where(a > 10, "Y", "N")
# ### 5.4 `nonzero`
#
# * `nonzero` is equivalent to `np.where(array != 0)`: it likewise returns the indices of the elements that are non-zero.
#
# * Syntax:
#
# |np. function|array-object method|
# |---|---|
# |numpy.nonzero(array)|ndarray.nonzero()|
np.random.seed(2)
a = np.random.randint(0, 5, 10)
a
np.nonzero(a)
a.nonzero()
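# The claimed equivalence can be checked directly (an illustration added here, not from the original notebook):

```python
import numpy as np

arr = np.array([0, 3, 0, 5, 1])
# nonzero and where(arr != 0) return the same index arrays
print(np.nonzero(arr)[0])     # [1 3 4]
print(np.where(arr != 0)[0])  # [1 3 4]
```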
# ### 5.5 Sorting: `sort()` and `argsort()`
#
# * Difference:
#   * `sort()` returns the sorted array
#   * `argsort()` returns the indices that would sort the array
# * Syntax:
#
# |np. function|array-object method|
# |---|---|
# |numpy.sort(a, axis=-1, kind=None, order=None)|ndarray.sort()|
# |numpy.argsort(a, axis=-1, kind=None, order=None)|ndarray.argsort()|
np.random.seed(3)
a = np.random.randint(0, 20, 10)
a
np.sort(a)
a.argsort()
# * Unlike `np.sort()`, the `array.sort()` method sorts in place: "the contents of the original array change".
a.sort()
a
# * For multidimensional arrays, the axis to sort along can be specified.
b = np.random.randint(0, 20, size=(5, 4))
b
np.sort(b, axis=0)
# * Several sorting algorithms are supported, including quicksort (the default), heapsort, mergesort and timsort; choose one with the `kind` argument. According to the official documentation, quicksort is the fastest, followed by mergesort/timsort, then heapsort.
c = np.random.randint(0, 100000000, 1000000)
np.sort(c, kind='heapsort')
Sample/Day_02_Sample.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + nbsphinx="hidden"
import open3d as o3d
import numpy as np
import os
import sys
# monkey patches visualization and provides helpers to load geometries
sys.path.append('..')
import open3d_tutorial as o3dtut
# change to True if you want to interact with the visualization windows
o3dtut.interactive = "CI" not in os.environ
# -
# # KDTree
# Open3D uses [FLANN](https://www.cs.ubc.ca/research/flann/) to build KDTrees for fast retrieval of nearest neighbors.
# ## Build KDTree from point cloud
# The code below reads a point cloud and builds a KDTree. This is a preprocessing step for the following nearest neighbor queries.
print("Testing kdtree in Open3D...")
print("Load a point cloud and paint it gray.")
pcd = o3d.io.read_point_cloud("../../test_data/Feature/cloud_bin_0.pcd")
pcd.paint_uniform_color([0.5, 0.5, 0.5])
pcd_tree = o3d.geometry.KDTreeFlann(pcd)
# ## Find neighboring points
# We pick the 1500th point as the anchor point and paint it red.
print("Paint the 1500th point red.")
pcd.colors[1500] = [1, 0, 0]
# ### Using search_knn_vector_3d
# The function `search_knn_vector_3d` returns a list of indices of the k nearest neighbors of the anchor point. These neighboring points are painted with blue color. Note that we convert `pcd.colors` to a numpy array to make batch access to the point colors, and broadcast a blue color [0, 0, 1] to all the selected points. We skip the first index since it is the anchor point itself.
print("Find its 200 nearest neighbors, and paint them blue.")
[k, idx, _] = pcd_tree.search_knn_vector_3d(pcd.points[1500], 200)
np.asarray(pcd.colors)[idx[1:], :] = [0, 0, 1]
# ### Using search_radius_vector_3d
# Similarly, we can use `search_radius_vector_3d` to query all points with distances to the anchor point less than a given radius. We paint these points with a green color.
print("Find its neighbors with distance less than 0.2, and paint them green.")
[k, idx, _] = pcd_tree.search_radius_vector_3d(pcd.points[1500], 0.2)
np.asarray(pcd.colors)[idx[1:], :] = [0, 1, 0]
print("Visualize the point cloud.")
o3d.visualization.draw_geometries([pcd],
zoom=0.5599,
front=[-0.4958, 0.8229, 0.2773],
lookat=[2.1126, 1.0163, -1.8543],
up=[0.1007, -0.2626, 0.9596])
# <div class="alert alert-info">
#
# **Note:**
#
# Besides the KNN search `search_knn_vector_3d` and the RNN search `search_radius_vector_3d`, Open3D provides a hybrid search function `search_hybrid_vector_3d`. It returns at most k nearest neighbors that have distances to the anchor point less than a given radius. This function combines the criteria of KNN search and RNN search. It is known as RKNN search in some of the literature. It has performance benefits in many practical cases, and is heavily used in a number of Open3D functions.
#
# </div>
examples/python/geometry/kdtree.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from sklearn import datasets
df = datasets.load_iris()
dir(df)
features = df.data
label = df.target
from sklearn.model_selection import train_test_split
train_feature, test_feature, train_label, test_label = train_test_split(features, label, test_size=0.1)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
clf = KNeighborsClassifier(n_neighbors=9)
clfd = DecisionTreeClassifier()
train = clf.fit(train_feature, train_label)
trainde = clfd.fit(train_feature, train_label)
predicted = train.predict(test_feature)
predictedde = trainde.predict(test_feature)
predicted
predictedde
from sklearn.metrics import accuracy_score
knn_acc = accuracy_score(test_label, predicted)
print('knn_acc - ', knn_acc)
Deci_acc = accuracy_score(test_label, predictedde)
print('Deci_acc - ', Deci_acc)
# # Format for approaching any data / machine learning problem
# # Data -->> Remove missing values -->> Train/test split -->> Instantiate classifier -->> Fit classifier on data -->> Predict -->> Accuracy score
KNN.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Import packages
#
# +
# system tools
import os
import sys
sys.path.append(os.path.join(".."))
# data munging tools
import pandas as pd
import utils.classifier_utils as clf
# Machine learning stuff
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import ShuffleSplit
from sklearn import metrics
# Visualisation
import matplotlib.pyplot as plt
import seaborn as sns
# -
# ## Reading in the data
# +
filename = os.path.join("..", "data", "labelled_data", "fake_or_real_news.csv")
DATA = pd.read_csv(filename, index_col=0)
# -
# __Inspect data__
DATA.sample(10)
DATA.shape
# <br>
# Q: How many examples of each label do we have?
DATA["label"].value_counts()
# ## Create balanced data
# We can use the function ```balance``` to create a more even dataset.
DATA_balanced = clf.balance(DATA, 1000)
DATA_balanced.shape
# <br>
#
# What do the label counts look like now?
DATA_balanced["label"].value_counts()
# <br>
#
# Let's now create new variables called ```texts``` and ```labels```, taking the data out of the dataframe so that we can mess around with them.
texts = DATA_balanced["text"]
labels = DATA_balanced["label"]
# # Train-test split
# I've included most of the 'hard work' for you here already, because these are long cells which might be easy to mess up while live-coding.
#
# Instead, we'll discuss what's happening. If you have questions, don't be shy!
X_train, X_test, y_train, y_test = train_test_split(texts, # texts for the model
labels, # classification labels
test_size=0.2, # create an 80/20 split
random_state=42) # random state for reproducibility
# # Vectorizing and Feature Extraction
# Vectorization. What is it and why are all the cool kids talking about it?
#
# Essentially, vectorization is the process whereby textual or visual data is 'transformed' into some kind of numerical representation. One of the easiest ways to do this is to simply count how often individual features appear in a document.
#
# Take the following text:
# <br><br>
# <i>My father’s family name being Pirrip, and my Christian name Philip, my infant tongue could make of both names nothing longer or more explicit than Pip. So, I called myself Pip, and came to be called Pip.</i>
# <br>
#
# We can convert this into the following vector
#
# | and | be | being | both | called | came | christian | could | explicit | family | father | i | infant | longer | make | more | my | myself | name | names | nothing | of | or | philip | pip | pirrip | s | so | than | to | tongue|
# | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
# | 2 | 1 | 1 | 1 | 2 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 3 | 1 | 2 | 1 | 1 | 1 | 1 | 1 | 3 | 1 | 1 | 1 | 1 | 1 | 1 |
#
# <br>
# Our textual data is hence reduced to a jumbled-up 'vector' of numbers, known somewhat quaintly as a <i>bag-of-words</i>.
# <br>
# <br>
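# As a quick stdlib illustration (not part of the original notebook), the same counting can be done with `collections.Counter`:

```python
import re
from collections import Counter

text = ("My father's family name being Pirrip, and my Christian name Philip, "
        "my infant tongue could make of both names nothing longer or more "
        "explicit than Pip. So, I called myself Pip, and came to be called Pip.")
# lowercase, keep alphabetic tokens only, then count each feature
tokens = re.findall(r"[a-z]+", text.lower())
counts = Counter(tokens)
print(counts["my"], counts["pip"], counts["name"])  # 3 3 2
```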
# To do this in practice, we first need to create a vectorizer.
#
# Tfidf vectors tend to be better for training classifiers. Why might that be?
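# One intuition, sketched below with the standard (unsmoothed) tf-idf formula — an illustration, not the exact smoothed weighting that sklearn's `TfidfVectorizer` applies: words that appear in nearly every document get an idf near zero, so frequent-but-uninformative words stop dominating the vectors.

```python
import math

docs = [["movie", "great", "plot"],
        ["movie", "boring", "plot"],
        ["movie", "great", "acting"]]

def tfidf(term, doc, docs):
    # term frequency within this document
    tf = doc.count(term) / len(doc)
    # idf penalizes terms that occur in many documents
    df = sum(term in d for d in docs)
    idf = math.log(len(docs) / df)
    return tf * idf

# "movie" occurs in every document, so its tf-idf weight is 0
print(tfidf("movie", docs[0], docs))  # 0.0
print(round(tfidf("great", docs[0], docs), 3))
```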
# __Create vectorizer object__
vectorizer = TfidfVectorizer(ngram_range = (1,2),    # unigrams and bigrams (1 word and 2 word units)
                             lowercase =  True,      # why use lowercase?
                             max_df = 0.95,          # remove very common words
                             min_df = 0.05,          # remove very rare words
                             max_features = 100)     # keep only the top 100 features
# This vectorizer is then used to turn all of our documents into a vector of numbers, instead of text.
# First we do it for our training data...
X_train_feats = vectorizer.fit_transform(X_train)
#... then we do it for our test data
X_test_feats = vectorizer.transform(X_test)
# We can also create a list of the feature names.
feature_names = vectorizer.get_feature_names_out()  # use get_feature_names() on scikit-learn < 1.0
# <br>
# Q: What are the first 20 features that are picked out by the vectorizer?
# ## Classifying and predicting
# We now have to 'fit' the classifier to our data. This means that the classifier takes our data and finds correlations between features and labels.
#
# These correlations are then the *model* that the classifier learns about our data. This model can then be used to predict the label for new, unseen data.
classifier = LogisticRegression(random_state=42).fit(X_train_feats, y_train)
# Q: How do we use the classifier to make predictions?
y_pred = classifier.predict(X_test_feats)
# Q: What are the predictions for the first 20 examples of the test data?
print(y_pred[0:20])
# We can also inspect the model, in order to see which features are most informative when trying to predict a label.
#
# To do this, we can use the ```show_features``` function that I defined earlier - how convenient!
# Q: What are the most informative features? Use ```show_features``` to find out!
clf.show_features(vectorizer, y_train, classifier, n=20)
# ## Evaluate
# The computer has now learned a model of how our data behaves. Well done, computer! But is it accurate?
# Q: How do we measure accuracy?
# <img src="../img/confusionMatrix.jpg">
# Thankfully, libraries like ```sklearn``` come with a range of tools that are useful for evaluating models.
#
# One way to do this, is to use a confusion matrix, similar to what you see above.
# Q: What should go in the argument called ```labels```?
clf.plot_cm(y_test, y_pred, normalized=False)
# We can also do some quick calculations, in order to assess just how well our model performs.
classifier_metrics = metrics.classification_report(y_test, y_pred)
print(classifier_metrics)
# ## Cross validation and further evaluation
# One thing we can't rule out is that our model's performance is simply an artifact of how this particular train-test split was made.
#
# To mitigate this, we can perform cross-validation: testing a number of different train-test splits and averaging the scores.
#
# Let's do this on the full dataset
X_vect = vectorizer.fit_transform(texts)
# The first plot is probably the most interesting. Some terminology:
#
# - If the two curves are close to each other and both have a low score, the model suffers from an underfitting problem (high bias)
#
# - If there is a large gap between the two curves, the model suffers from an overfitting problem (high variance)
#
# +
title = "Learning Curves (Logistic Regression)"
cv = ShuffleSplit(n_splits=100, test_size=0.2, random_state=0)
estimator = LogisticRegression(random_state=42)
clf.plot_learning_curve(estimator, title, X_vect, labels, cv=cv, n_jobs=4)
# -
# - The second plot shows the time required to train the model for each training-set size.
# - The third plot shows how the model's score varies with the time required to train it.
notebooks/session7.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="ZvwnP6Dky_8W"
# ## RNN models for text data
#
# We analyse here data from the Internet Movie Database (IMDB: https://www.imdb.com/).
#
# We use RNN to build a classifier for movie reviews: given the text of a review, the model will predict whether it is a positive or negative review.
#
# #### Steps
#
# 1. Load the dataset (50K IMDB Movie Review)
# 2. Clean the dataset
# 3. Encode the data
# 4. Split into training and testing sets
# 5. Tokenize and pad/truncate reviews
# 6. Build the RNN model
# 7. Train the model
# 8. Test the model
# 9. Applications
#
# + id="NnRXCH49y6_Q"
## import relevant libraries
import re
import nltk
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import seaborn as sns
import itertools
import matplotlib.pyplot as plt
from scipy import stats
from keras.datasets import imdb
from nltk.corpus import stopwords # to get collection of stopwords
from sklearn.model_selection import train_test_split # for splitting dataset
from tensorflow.keras.preprocessing.text import Tokenizer # to encode text to int
from tensorflow.keras.preprocessing.sequence import pad_sequences # to do padding or truncating
from tensorflow.keras.models import Sequential # the model
from tensorflow.keras.layers import Embedding, LSTM, Dense # layers of the architecture
from tensorflow.keras.callbacks import ModelCheckpoint # save model
from tensorflow.keras.models import load_model # load saved model
nltk.download('stopwords')
# + [markdown] id="E7LGxCn73Try"
# #### Reading the data
#
# We read a raw extract from IMDB hosted on a GitHub page:
# + id="EkJTNbf1gMOk"
DATAURL = 'https://raw.githubusercontent.com/hansmichaels/sentiment-analysis-IMDB-Review-using-LSTM/master/IMDB%20Dataset.csv'
# + id="iuDgk_M8g8SJ"
data = pd.read_csv(DATAURL)
print(data)
# + id="meRL6w6Z5HTp"
## alternative way of getting the data, already preprocessed
# (X_train,Y_train),(X_test,Y_test) = imdb.load_data(path="imdb.npz",num_words=None,skip_top=0,maxlen=None,start_char=1,seed=13,oov_char=2,index_from=3)
# + [markdown] id="D2yaSrpD5PRu"
# #### Preprocessing
# + [markdown] id="-_TkfRtV5AAV"
# The original reviews are "dirty": they contain HTML tags, punctuation, uppercase, stop words, etc., which are not good for model training.
# Therefore, we now need to clean the dataset.
#
# **Stop words** are commonly used words in a sentence that are usually ignored in the analysis (e.g. "the", "a", "an", "of", etc.)
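# A minimal sketch of stopword removal with a hand-made stopword set (the notebook uses NLTK's English list; this toy set is an assumption for illustration only):

```python
# toy stopword set standing in for nltk's stopwords.words('english')
english_stops = {"the", "a", "an", "of", "is"}

sentence = "the movie is an example of a review"
# keep only the tokens that are not stop words
filtered = [w for w in sentence.split() if w not in english_stops]
print(filtered)  # ['movie', 'example', 'review']
```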
# + id="0OOi_ROPhGWA"
english_stops = set(stopwords.words('english'))
# + id="0uB2FTq_3xH7"
list(itertools.islice(english_stops, 10))  # peek at 10 of the stop words
# + id="dB2lpEssuVI6"
def prep_dataset():
    x_data = data['review']       # Reviews/Input
    y_data = data['sentiment']    # Sentiment/Output
    # PRE-PROCESS REVIEWS
    x_data = x_data.replace({'<.*?>': ''}, regex=True)       # remove html tags
    x_data = x_data.replace({'[^A-Za-z]': ' '}, regex=True)  # remove non-alphabetic characters
    # lowercase first, so that capitalized stop words (e.g. "The") are also caught below
    x_data = x_data.apply(lambda review: [w.lower() for w in review.split()])
    x_data = x_data.apply(lambda review: [w for w in review if w not in english_stops])  # remove stop words
    # ENCODE SENTIMENT -> 0 & 1
    y_data = y_data.replace('positive', 1)
    y_data = y_data.replace('negative', 0)
    return x_data, y_data
x_data, y_data = prep_dataset()
print('Reviews')
print(x_data, '\n')
print('Sentiment')
print(y_data)
# + [markdown] id="fEoL34Peu5F0"
# #### Split dataset
#
# `train_test_split()` function to partition the data in 80% training and 20% test sets
# + id="NyAK4VQnu9eb"
x_train, x_test, y_train, y_test = train_test_split(x_data, y_data, test_size = 0.2)
# + [markdown] id="JrvyhnH17hDu"
# #### A little bit of EDA
# + id="7VvJCCug5bNE"
print("x train shape: ",x_train.shape)
print("y train shape: ",y_train.shape)
# + id="ZqxSIAPO8vuo"
print("x test shape: ",x_test.shape)
print("y test shape: ",y_test.shape)
# + [markdown] id="zbNWCRqD82Fe"
# Distribution of classes in the training set
# + id="yiu43jBxyXqK"
plt.figure();
sns.countplot(y_train);
plt.xlabel("Classes");
plt.ylabel("Frequency");
plt.title("Y Train");
# + id="8cFqAVr01iul"
review_len_train = []
review_len_test = []
for i,j in zip(x_train,x_test):
review_len_train.append(len(i))
review_len_test.append(len(j))
# + id="AOeC4snE2lvz"
print("min train: ", min(review_len_train), "max train: ", max(review_len_train))
print("min test: ", min(review_len_test), "max test: ", max(review_len_test))
# + [markdown] id="WTFU56qPxbsT"
# #### Tokenize and pad/truncate
#
# RNN models only accept numeric input, so the reviews need to be encoded. `tensorflow.keras.preprocessing.text.Tokenizer` encodes the reviews as integers, where each unique word is automatically indexed (via `fit_on_texts`) based on the training data
#
# x_train and x_test are then converted to integer sequences using `texts_to_sequences`
#
# Each review has a different length, so we pad (with 0) or truncate all sequences to the same length (here, the mean review length) using `tensorflow.keras.preprocessing.sequence.pad_sequences`
#
#
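The mechanics described above can be sketched in a few lines of plain Python, which makes the encode/pad steps explicit (this toy version is for illustration only; the actual pipeline below uses `Tokenizer` and `pad_sequences`):

```python
# Toy tokenizer: build a word index from training texts, encode, then pad/truncate
texts = [["great", "movie"], ["bad", "movie", "really", "bad"]]

word_index = {}
for text in texts:
    for w in text:
        if w not in word_index:
            word_index[w] = len(word_index) + 1  # index 0 is reserved for padding

def encode_and_pad(text, maxlen):
    seq = [word_index[w] for w in text]
    seq = seq[:maxlen]                      # truncate long sequences
    return seq + [0] * (maxlen - len(seq))  # post-pad short ones with 0

print(word_index)                   # {'great': 1, 'movie': 2, 'bad': 3, 'really': 4}
print(encode_and_pad(texts[0], 3))  # [1, 2, 0]
print(encode_and_pad(texts[1], 3))  # [3, 2, 4]
```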
# + id="5YJKZkiX9WlC"
def get_max_length():
review_length = []
for review in x_train:
review_length.append(len(review))
return int(np.ceil(np.mean(review_length)))
# ENCODE REVIEW
token = Tokenizer(lower=False) # no need to lower-case here: the data was already lowered in prep_dataset()
token.fit_on_texts(x_train)
x_train = token.texts_to_sequences(x_train)
x_test = token.texts_to_sequences(x_test)
max_length = get_max_length()
x_train = pad_sequences(x_train, maxlen=max_length, padding='post', truncating='post')
x_test = pad_sequences(x_test, maxlen=max_length, padding='post', truncating='post')
## size of vocabulary
total_words = len(token.word_index) + 1 # add 1 because of 0 padding
print('Encoded X Train\n', x_train, '\n')
print('Encoded X Test\n', x_test, '\n')
print('Maximum review length: ', max_length)
# + id="xvmcsvj0hb5E"
x_train[0,0]
# + [markdown] id="f0m3Tsp6xyO-"
# #### Build model
#
# **Embedding Layer**: creates a word vector for each word in the vocabulary, grouping related or similar words by analyzing the words that appear around them
#
# **LSTM Layer**: decides which data to keep or discard by considering the current input, the previous output, and the previous memory. Its main components are:
#
# - *Forget Gate*: decides which information is kept or thrown away
# - *Input Gate*: updates the cell state by passing the previous output and current input through a sigmoid activation
# - *Cell State*: the new cell state is the old state multiplied by the forget vector (values near 0 are dropped), plus the output of the input gate
# - *Output Gate*: decides the next hidden state, which is used for predictions
#
# **Dense Layer**: computes the output from the LSTM layer; it uses the sigmoid activation because the output is either 0 or 1
#
#
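The gate equations described above can be written out in a few lines of NumPy. This is a single LSTM step for illustration only; the stacked `[forget, input, candidate, output]` weight layout is one common convention, not necessarily the one Keras uses internally:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step; W, U, b each stack the 4 gate parameter blocks."""
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b   # all gate pre-activations at once, shape (4n,)
    f = sigmoid(z[0:n])          # forget gate
    i = sigmoid(z[n:2*n])        # input gate
    g = np.tanh(z[2*n:3*n])      # candidate cell state
    o = sigmoid(z[3*n:4*n])      # output gate
    c = f * c_prev + i * g       # new cell state
    h = o * np.tanh(c)           # new hidden state
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = rng.normal(size=(4 * n_hid, n_in))
U = rng.normal(size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
h, c = lstm_step(rng.normal(size=n_in), np.zeros(n_hid), np.zeros(n_hid), W, U, b)
print(h.shape, c.shape)  # (4,) (4,)
```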
# + id="oxifTWPk9jVa"
# ARCHITECTURE
model = Sequential()
model.add(Embedding(total_words, 32, input_length = max_length))
model.add(LSTM(64))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
print(model.summary())
# + [markdown] id="sAbxJ5gxyC3e"
# #### Training the model
#
# For training, we fit the x_train (input) and y_train (output/label) data to the RNN model, using mini-batch learning with a batch size of 128 for 5 epochs.
#
# + id="eSbW8xEo9l40"
num_epochs = 5
batch_size = 128
checkpoint = ModelCheckpoint(
'models/LSTM.h5',
monitor='accuracy',
save_best_only=True,
verbose=1
)
history = model.fit(x_train, y_train, batch_size = batch_size, epochs = num_epochs, callbacks=[checkpoint])
# + id="Be28nXrNzPne"
plt.figure()
plt.plot(history.history["accuracy"],label="Train");
plt.title("Accuracy")
plt.ylabel("Accuracy")
plt.xlabel("Epochs")
plt.legend()
plt.show();
# + [markdown] id="KriRus7uGo3I"
# #### Testing
# + id="5XZo7UlazYRO"
from sklearn.metrics import confusion_matrix
predictions = model.predict(x_test).ravel() # flatten the (N, 1) output to shape (N,)
predicted_labels = np.where(predictions > 0.5, "good review", "bad review")
target_labels = y_test
target_labels = np.where(target_labels > 0.5, "good review", "bad review")
con_mat_df = confusion_matrix(target_labels, predicted_labels, labels=["bad review","good review"])
print(con_mat_df)
# + id="BscjHxahGmUn"
y_pred = np.where(predictions > 0.5, 1, 0)
true = 0
for i, y in enumerate(y_test):
if y == y_pred[i]:
true += 1
print('Correct Prediction: {}'.format(true))
print('Wrong Prediction: {}'.format(len(y_pred) - true))
print('Accuracy: {}'.format(true/len(y_pred)*100))
# + [markdown] id="1DxZQ-WhHJiz"
# ### A little application
#
# Now we feed a new review to the trained RNN model to see whether it is classified as positive or negative.
#
# We apply the same preprocessing steps (cleaning, tokenizing, encoding) and then move directly to the prediction step, since the RNN model has already been trained and evaluated on the test set.
# + id="WA3ydUMDHJN6"
loaded_model = load_model('models/LSTM.h5')
# + id="Lohg7VySVcqj"
review = 'Movie Review: Nothing was typical about this. Everything was beautifully done in this movie, the story, the flow, the scenario, everything. I highly recommend it for mystery lovers, for anyone who wants to watch a good movie!'
# + id="n9KPYoZJWVv2"
# Pre-process input
regex = re.compile(r'[^a-zA-Z\s]')
review = regex.sub('', review)
print('Cleaned: ', review)
words = review.split(' ')
filtered = [w for w in words if w not in english_stops]
filtered = ' '.join(filtered)
filtered = [filtered.lower()]
print('Filtered: ', filtered)
# + id="QpqzYgtqXr4-"
tokenize_words = token.texts_to_sequences(filtered)
tokenize_words = pad_sequences(tokenize_words, maxlen=max_length, padding='post', truncating='post')
print(tokenize_words)
# + id="lz3AXgFuXute"
result = loaded_model.predict(tokenize_words)
print(result)
# + id="DrMAYJV8XzF3"
if result >= 0.7: # using a stricter threshold than the 0.5 used during evaluation
print('positive')
else:
print('negative')
# + [markdown] id="wiFNST4Hiypu"
# ## Exercise
#
# Try to write your own movie review, and then have the deep learning model classify it.
#
# 0. write your review
# 1. clean the text data
# 2. tokenize it
# 3. predict and evaluate
# + id="ak9BaixwjTYj"
|
lab_day5/day5_code03 RNN-3 text sequence data.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ! pip install pandas --upgrade
# ! pip install bokeh
from datetime import datetime
Starttime = datetime.now()
Starttime
import pandas as pd
diamonds = pd.read_csv("https://vincentarelbundock.github.io/Rdatasets/csv/ggplot2/diamonds.csv")
diamonds.columns #Single Line Comment starts with #
# name of variables is given by columns. In R we would use the command names(object)
# Note also R uses the FUNCTION(OBJECTNAME) syntax while Python uses OBJECTNAME.FUNCTION
len(diamonds)
diamonds.info()
diamonds.head(20)
import numpy as np
rows = np.random.choice(diamonds.index.values, round(0.0001*len(diamonds)))
rows
diamonds.describe()
diamonds.price.describe()
|
Test.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="nn1gBFcM2UAm" colab_type="code" colab={}
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
# + id="4b2n7S1z6pDt" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="f9527e35-5800-4bf1-c559-08fe68d4d408"
# !pip install tld
# + id="JLQ4wYaY3fBt" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 195} outputId="3fe80a61-6441-42d3-cf73-7d1680cfab59"
df = pd.read_csv("urldata.csv")
df.head()
# + id="MqnXYnO23n75" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 373} outputId="ab64a1f4-58bd-4aed-daa1-847f4b02eade"
df.describe(include = "all")
# + id="8irt7SgC3qkx" colab_type="code" colab={}
df.drop('Unnamed: 0',axis=1,inplace=True)
# + id="KL6iidSQ5uCD" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 343} outputId="52059537-ee18-4c46-a903-18c76c278274"
df.head(10)
# + id="rw6n6i9I50wD" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 343} outputId="95f0c7bf-8d32-4587-9958-5993addd5e17"
df.tail(10)
# + id="r2sSW-Bv56YB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 218} outputId="aa95045f-7c5d-4a61-f157-ff6396a90840"
missingdata = df.isnull()
for column in missingdata.columns.values.tolist():
print(column)
print (missingdata[column].value_counts())
print("")
# + id="nshDV1JJ6S3k" colab_type="code" colab={}
from urllib.parse import urlparse
from tld import get_tld
import os.path
# + id="eu7tZ7e46tfq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 195} outputId="858d4d16-62aa-46a2-9095-73640495a33a"
df['Url Length'] = df['url'].apply(lambda i: len(str(i)))
df.head()
# + id="LjUYrK097SLq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 195} outputId="706d8019-7f8f-4d01-ccd0-782bd3bcb5e9"
df['host length'] = df['url'].apply(lambda i: len(urlparse(i).netloc))
df.head()
# + id="uULgNKAO7kZJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 343} outputId="b5b1b900-484e-4d46-e49e-983d01c20b76"
df['path length'] = df['url'].apply(lambda i: len(urlparse(i).path))
df.tail(10)
# + id="Pf8N847H8VpV" colab_type="code" colab={}
def fd_length(url):
    urlpath = urlparse(url).path
    try:
        return len(urlpath.split('/')[1])
    except IndexError: # the path has no first directory
        return 0
# + id="A6hik3tL8leJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 195} outputId="d2d9271d-f143-4191-de51-e38deaf3364d"
df['First Dir length'] = df['url'].apply(lambda i: fd_length(i))
df.tail()
# + id="J96jMGgR9EwF" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 195} outputId="53b8841d-5adf-4817-e2c7-0094df66b0c9"
df['tld'] = df['url'].apply(lambda i: get_tld(i,fail_silently=True))
df.head()
# + id="8YoHFXmt9gCp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 195} outputId="0e423a1b-857b-4bb8-c6c7-bbd8c2d7e541"
df[['url','tld']].tail()
# + id="oIhCr_oa92lE" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 225} outputId="40454995-faf7-4cfa-f72c-523cf2c1547a"
df=df.rename_axis('Index')
df.head()
# + id="6qNVWDsW-ZMa" colab_type="code" colab={}
def tld_length(tld):
    try:
        return len(tld)
    except TypeError: # tld is None when get_tld() failed
        return -1
df['tld length'] = df['tld'].apply(lambda i: tld_length(i))
# + id="lUJLYo4v_JqG" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 225} outputId="de7c3dc5-62c0-4125-ccfb-3bd22ad40eff"
df.drop('tld',axis = 1,inplace=True)
df.head()
# + id="rpOLOEJc_Rjb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 295} outputId="a73c56ef-fee5-4a11-a2f6-cdebf8847dd0"
df['count-'] = df['url'].apply(lambda i: i.count('-'))
df['count@'] = df['url'].apply(lambda i: i.count('@'))
df['count?'] = df['url'].apply(lambda i: i.count('?'))
df['count%'] = df['url'].apply(lambda i: i.count('%'))
df['count.'] = df['url'].apply(lambda i: i.count('.'))
df['count='] = df['url'].apply(lambda i: i.count('='))
df['count-http'] = df['url'].apply(lambda i : i.count('http'))
df['count-https'] = df['url'].apply(lambda i : i.count('https'))
df['count-www'] = df['url'].apply(lambda i: i.count('www'))
df.tail()
|
Urls.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Budget App
#
# This project is from the freecodecamp.com course "Scientific Computing with Python." Check out the readme for the exact project specifications.
# +
import math
class Category:
def __init__(self, name):
self.name = name
self.ledger = []
self.total = 0
def get_balance(self):
self.total = 0 #setting initial total
for entry in self.ledger: #iterating through entries in ledger
            self.total += entry["amount"] #adding the corresponding amount (bearing in mind ledger is a list of dicts)
return self.total
def check_funds(self, arg):
total = self.get_balance() #getting initial balance
if arg <= total: #if amount passed in is <= than initial balance, return True
return True
else:
return False
def deposit(self, amount, description = ""): #make a deposit, append new dict to ledger list
self.ledger.append({"amount": amount, "description": description})
def withdraw(self, amount, description = ""):
if self.check_funds(amount) == True: #first check if funds are available
            self.ledger.append({"amount": amount * (-1), "description": description}) #if so, add new dict to ledger list
return True
else:
return False
def transfer(self, amount, new_cat):
if self.check_funds(amount) == True: #check if funds available
self.withdraw(amount, "Transfer to " + str(new_cat.name)) #add withdraw entry to ledger
new_cat.deposit(amount, "Transfer from " + str(self.name)) #add deposit entry to ledger of other category
return True
else:
return False
def __str__(self): #this string prints when you print the category- it's the ledger formatted in a string
stars = 30- len(str(self.name)) #need to surround title with *. First get # of stars by subtracting length of name from 30
if stars % 2 == 0: #If # stars is divisible by 2, the number of stars on each side of the name in the title will be even
star_side = stars / 2
self.str = "*" * int(star_side) + str(self.name) + "*" * int(star_side) + "\n"
else:
            star_side = stars // 2 #if the number of stars is odd, there will be one extra star on the left
self.str = "*" * int((star_side +1)) + str(self.name) + "*" * star_side + "\n"
for entry in self.ledger: #goes through all entries in ledger (all dictionaries that are in the ledger list)
self.str += "{:<23.23s}{:>7.2f}\n".format(entry["description"], entry["amount"]) #appends the information from the dicts in the correct format
self.str += "Total: " + str(self.get_balance()) #adds the total add the end
return self.str
def create_spend_chart(cats): #passed a list of categories
num_list = [10,9,8,7,6,5,4,3,2,1,0] #initializing lists
cat_list = [] #this list is used to calculate total withdrawals/ percentages
output= "" #output string to add to
overall_total = 0
name_list = [] #this list is used to help with the name formatting further down
for x in cats: #iterates all the categories in the list we were passed
name_list.append(list(x.name))#appends a list of the individuals letters to the list "name_list" (a list of letter is used so we can use pop() later)
for x in cats: #creating a list of dicts to track total spent
cat_list.append({"category": x, "total_spent": 0})
for entry in cat_list: #for each category in the list
total = 0 #our initial total is zero
for line in entry["category"].ledger: #we then iterate over each line in the ledger for each category
if line["amount"] < 0: #if the amount in the line is less than 0 (i.e., it was a withdrawal)
total += ((line["amount"]) * (-1)) #then we add it to the total variable
entry.update({"total_spent": total}) #at the end, we update the dictionaries listed in cat_list
overall_total += entry["total_spent"] #then add that to the overall total spent (i.e., all categories combined)
for entry in cat_list: #for loop calculates the percentages and adds a new dictionary entry to each dictionary in cat_list
print(round(entry["total_spent"]/overall_total, 1) * 100)
        entry.update({"percent": math.floor((entry["total_spent"]/overall_total)* 10) * 10}) #math.floor rounds down to the nearest 10
output += "Percentage spent by category\n" #starting to construct the output string
for i in num_list: # goes through the list of numbers
output += "{:>3s}".format(str(i*10)) +"| " #formats numbers on side
for entry in cat_list: #checks to compare the numbers with the category percentages, adds "o" if >=
if entry["percent"] >= i*10:
output += "o "
else:
output += " " #otherwise, adds spacing
output += "\n"
output += " " + "-" + ("-" * 3 * len(cat_list)) + "\n" #dash line added to output string
longest = ""
for entry in cats: #goes through the list cats (cats is a list of lists)
if len(entry.name) > len(longest): #if the length of each entry (which is a list) is longer than the variable longest
longest = entry.name #then longest is updated
longest_index = cats.index(entry) #and the index is updated.
#this loop is used to find the longest category name and the index of that name for the next section
flip = 0
while flip == 0: #formats names at bottom of chart.
output += " " * 5
for entry in name_list: # for each entry in name_list
if entry != []: # checks to see if that entry (which is a list) is empty
output += entry.pop(0) + " " # if it isn't, we pop the first thing in the list (i.e., the next letter of the category name)
else:
output += " " #otherwise we just add spaces
if name_list[longest_index] == []: #if the list of the longest category name is blank, the while loop ends
flip = 1
else:
output += "\n"
return output
# +
food = Category("Food")
entertainment = Category("Entertainment")
business = Category("Business")
food.deposit(900, "deposit")
food.withdraw(45.67, "milk, cereal, eggs, bacon, bread")
food.transfer(20, entertainment)
expected = "*************Food*************\ndeposit 900.00\nmilk, cereal, eggs, bac -45.67\nTransfer to Entertainme -20.00\nTotal: 834.33"
print(expected)
print(food)
# +
x = "Percentage spent by category\n100| \n 90| \n 80| \n 70| o \n 60| o \n 50| o \n 40| o \n 30| o \n 20| o o \n 10| o o \n 0| o o o \n ----------\n B F E \n u o n \n s o t \n i d e \n n r \n e t \n s a \n s i \n n \n m \n e \n n \n t "
print(x)
food.deposit(900, "deposit")
entertainment.deposit(900, "deposit")
business.deposit(900, "deposit")
food.withdraw(105.55)
entertainment.withdraw(33.40)
business.withdraw(10.99)
print(create_spend_chart([business, food, entertainment]))
# -
|
Budget App.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %pylab
# %matplotlib inline
import carna.py as cpy
import skimage.io
import scipy.ndimage as ndi
print(f'Carna {cpy.version} (CarnaPy {cpy.py_version})')
# # Example data
#
# **3D microscopy data used for examples:** [*<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., et al., 2018. 3D cell nuclear morphology: Microscopy imaging dataset and voxel-based morphometry classification results, in: Proceedings of the Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), IEEE. pp. 2272–2280.*](http://www.socr.umich.edu/projects/3d-cell-morphometry/data.html)
#
# **First, consider the following 2D example.**
#
# The image is from a 3D stack:
data = skimage.io.imread('../../testdata/08_06_NormFibro_Fibrillarin_of_07_31_Slide2_num2_c0_g006.tif').T
data = data / data.max()
# The z-spacing is unknown, so we will just assume that z-resolution is 4 times lower than x-/y-resolution:
spacing = (1, 1, 4)
# This is effectively the width, height, and depth of a voxel.
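In other words, converting a voxel index to a physical position is an element-wise multiplication with the spacing:

```python
import numpy as np

spacing = np.array([1, 1, 4])        # width, height, depth of a voxel
voxel_index = np.array([10, 20, 5])  # an example voxel coordinate
physical_position = voxel_index * spacing
print(physical_position)  # [10 20 20]
```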
# # Illustration of the example setup in 2D
#
# Lets define some example markers and the camera position:
# +
markers = array([
[ 40, 600, 15],
[110, 610, 16],
[150, 665, 15],
[180, 700, 17],
[180, 740, 18],
])
camera_position = [400, 200, 50]
# -
# This example setup is shown in 2D below. The markers correspond to red dots, and the position of the camera corresponds to the green dot:
# +
vd = (markers[2] - camera_position + 0.)[:2][::-1]
vd /= linalg.norm(vd)
cp = camera_position[:2][::-1]
R1 = array([[0, -1], [ 1, 0]])
R2 = array([[0, 1], [-1, 0]])
rot = lambda x: array([[cos(x), -sin(x)],[sin(x), cos(x)]])
vpl = rot(+pi/4) @ vd * 500
vpr = rot(-pi/4) @ vd * 500
imshow(data[:,:, 15], 'gray')
colorbar()
scatter(*markers[:,:-1][:,::-1].T, c='r')
scatter([cp[0]], [cp[1]], c='g')
_xlim, _ylim = xlim(), ylim()
plot([cp[0], cp[0] + vpl[0]], [cp[1], cp[1] + vpl[1]], '--g')
plot([cp[0], cp[0] + vpr[0]], [cp[1], cp[1] + vpr[1]], '--g')
xlim(*_xlim)
ylim(*_ylim)
tight_layout()
# -
# The dashed green lines indicate the **field of view** of the virtual camera (90 degrees).
# # Direct volume rendering
# **Example 1.** Use `dvr` to issue a direct volume rendering:
# +
with cpy.SingleFrameContext((512, 1024), fov=90, near=1, far=1000) as rc:
volume = rc.volume(data, spacing=spacing, normals=True) ## declaration of the volume data
rc.dvr(translucency=2, sample_rate=500)
marker_mesh = rc.ball(radius=15)
marker_material = rc.material(color=(1,0,0,1))
rc.meshes(marker_mesh, marker_material, volume.map_voxel_coordinates(markers), parent=volume)
rc.camera.translate(*volume.map_voxel_coordinates([camera_position])[0]) \
.look_at(volume.map_voxel_coordinates(markers)[2], up=(0,0,1))
figure(figsize=(8,6))
imshow(rc.result)
tight_layout()
# -
# Remember to pass `normals=True` to the volume declaration via `volume` to trigger computation of the normal vectors, which are required for lighting. Also, note that the `dtype` of the data is `float64`:
data.dtype
# However, the data itself is only 8-bit; `skimage.io` converts it to 64-bit on loading. This is fine for system memory, but video memory is usually rather limited, and since Carna currently only supports 8-bit and 16-bit volume data, wasting a factor of 2 is not very appealing. This is what the above warning indicates. To circumvent it, simply use the `fmt_hint` parameter of the volume declaration:
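To get a feeling for the memory factor involved, compare the raw byte counts of a float64 array and its uint8 counterpart (the array shape here is arbitrary, chosen just for illustration):

```python
import numpy as np

data64 = np.zeros((128, 128, 30), dtype=np.float64)  # 8 bytes per voxel
data8 = (data64 * 255).astype(np.uint8)              # 1 byte per voxel
print(data64.nbytes // data8.nbytes)  # 8
```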
# **Example 2.** Use `fmt_hint='uint8'` to suggest 8bit volume data representation:
# +
with cpy.SingleFrameContext((512, 1024), fov=90, near=1, far=1000) as rc:
volume = rc.volume(data, spacing=spacing, normals=True, fmt_hint='uint8') ## declaration of the volume data
rc.dvr(translucency=2, sample_rate=500)
marker_mesh = rc.ball(radius=15)
marker_material = rc.material(color=(1,0,0,1))
rc.meshes(marker_mesh, marker_material, volume.map_voxel_coordinates(markers), parent=volume)
rc.camera.translate(*volume.map_voxel_coordinates([camera_position])[0]) \
.look_at(volume.map_voxel_coordinates(markers)[2], up=(0,0,1))
figure(figsize=(8,6))
imshow(rc.result)
tight_layout()
# -
# **Example 3.** You can also use a more sophisticated color map, like $[0,0.2) \mapsto$ teal and $[0.4,1] \mapsto$ yellow:
# +
with cpy.SingleFrameContext((512, 1024), fov=90, near=1, far=1000) as rc:
volume = rc.volume(data, spacing=spacing, normals=True, fmt_hint='uint8') ## declaration of the volume data
rc.dvr(translucency=2, sample_rate=500,
color_map=[(0, 0.2, (0, 1, 1, 0), (0, 1, 1, 0.2)), (0.4, 1.0, (1, 1, 0, 0), (1, 1, 0, 1))])
marker_mesh = rc.ball(radius=15)
marker_material = rc.material(color=(1,0,0,1))
rc.meshes(marker_mesh, marker_material, volume.map_voxel_coordinates(markers), parent=volume)
rc.camera.translate(*volume.map_voxel_coordinates([camera_position])[0]) \
.look_at(volume.map_voxel_coordinates(markers)[2], up=(0,0,1))
figure(figsize=(8,6))
imshow(rc.result)
tight_layout()
# -
# **Example 4.** Direct volume rendering can also be performed without lighting:
# +
with cpy.SingleFrameContext((512, 1024), fov=90, near=1, far=1000) as rc:
volume = rc.volume(data, spacing=spacing, fmt_hint='uint8')
rc.dvr(translucency=10, sample_rate=500)
marker_mesh = rc.ball(radius=15)
marker_material = rc.material(color=(1,0,0,1))
rc.meshes(marker_mesh, marker_material, volume.map_voxel_coordinates(markers), parent=volume)
rc.camera.translate(*volume.map_voxel_coordinates([camera_position])[0]) \
.look_at(volume.map_voxel_coordinates(markers)[2], up=(0,0,1))
figure(figsize=(8,6))
imshow(rc.result)
tight_layout()
# -
# Omitting `normals=True` if lighting is not required speeds up the `volume` command but produces less realistic renderings.
# # Maximum intensity projection
# **Example 5.** Use `rc.mip` to specify a maximum intensity projection:
# +
with cpy.SingleFrameContext((512, 1024), fov=90, near=1, far=1000) as rc:
volume = rc.volume(data, spacing=spacing, fmt_hint='uint8')
rc.mip(sample_rate=500)
marker_mesh = rc.ball(radius=15)
marker_material = rc.material(color=(1,0,0,1))
rc.meshes(marker_mesh, marker_material, volume.map_voxel_coordinates(markers), parent=volume)
rc.camera.translate(*volume.map_voxel_coordinates([camera_position])[0]) \
.look_at(volume.map_voxel_coordinates(markers)[2], up=(0,0,1))
figure(figsize=(8,6))
imshow(rc.result)
tight_layout()
# -
# **Example 6.** Use the `layers` parameter of `rc.mip` to specify the color map and/or multiple layers.
#
# In this example, intensities $[0,0.2)$ are mapped linearly to blue, whereas intensities $[0.4, 1]$ are mapped linearly to yellow:
# +
with cpy.SingleFrameContext((512, 1024), fov=90, near=1, far=1000) as rc:
volume = rc.volume(data, spacing=spacing, fmt_hint='uint8')
rc.mip(sample_rate=500, layers=[(0, 0.2, (0, 0, 1, 0.2)), (0.4, 1, (1, 1, 0, 1))])
marker_mesh = rc.ball(radius=15)
marker_material = rc.material(color=(1,0,0,1))
rc.meshes(marker_mesh, marker_material, volume.map_voxel_coordinates(markers), parent=volume)
rc.camera.translate(*volume.map_voxel_coordinates([camera_position])[0]) \
.look_at(volume.map_voxel_coordinates(markers)[2], up=(0,0,1))
figure(figsize=(8,6))
imshow(rc.result)
tight_layout()
# -
# # Cutting plane rendering
# **Example 7.** Use `rc.plane` to define cutting planes: (we also change the camera position)
# +
with cpy.SingleFrameContext((512, 1024), fov=90, near=1, far=1000) as rc:
volume = rc.volume(data, spacing=spacing, fmt_hint='uint8')
markers_in_volume = volume.map_voxel_coordinates(markers)
rc.plane((0,0,1), markers_in_volume[0], parent=volume) ## plane through first marker, normal along z-axis
rc.plane((1,0,0), markers_in_volume[0], parent=volume) ## plane through first marker, normal along x-axis
marker_mesh = rc.ball(radius=15)
marker_material = rc.material(color=(1,0,0,1))
rc.meshes(marker_mesh, marker_material, markers_in_volume, parent=volume)
rc.camera.translate(*volume.map_voxel_coordinates([[400, 200, 80]])[0]) \
.look_at(volume.map_voxel_coordinates([markers.mean(axis=0)]), up=(0,0,1)) \
.translate(-35,0,-450)
figure(figsize=(8,6))
imshow(rc.result)
tight_layout()
# -
# **Example 8.** Add `rc.occluded()` to visualize visually occluded geometry: (note that the markers are half-translucent)
# +
with cpy.SingleFrameContext((512, 1024), fov=90, near=1, far=1000) as rc:
volume = rc.volume(data, spacing=spacing, fmt_hint='uint8')
markers_in_volume = volume.map_voxel_coordinates(markers)
rc.plane((0,0,1), markers_in_volume[0], parent=volume) ## plane through first marker, normal along z-axis
rc.plane((1,0,0), markers_in_volume[0], parent=volume) ## plane through first marker, normal along x-axis
marker_mesh = rc.ball(radius=15)
marker_material = rc.material(color=(1,0,0,1))
rc.meshes(marker_mesh, marker_material, markers_in_volume, parent=volume)
rc.occluded()
rc.camera.translate(*volume.map_voxel_coordinates([[400, 200, 80]])[0]) \
.look_at(volume.map_voxel_coordinates([markers.mean(axis=0)]), up=(0,0,1)) \
.translate(-35,0,-450)
figure(figsize=(8,6))
imshow(rc.result)
tight_layout()
# -
# # Combining different visualization techniques
# **Example 9.** This example shows the combination of maximum intensity projection and cutting planes: (the markers are left out for clarity)
# +
with cpy.SingleFrameContext((512, 1024), fov=90, near=1, far=1000) as rc:
volume = rc.volume(data, spacing=spacing, fmt_hint='uint8')
markers_in_volume = volume.map_voxel_coordinates(markers)
rc.plane((0,0,1), markers_in_volume[0], parent=volume) ## plane through first marker, normal along z-axis
rc.plane((1,0,0), markers_in_volume[0], parent=volume) ## plane through first marker, normal along x-axis
rc.mip(layers=[(0.4, 1, (0, 1, 0, 1))], sample_rate=500)
rc.camera.translate(*volume.map_voxel_coordinates([[400, 200, 80]])[0]) \
.look_at(volume.map_voxel_coordinates([markers.mean(axis=0)]), up=(0,0,1)) \
.translate(-35,0,-450)
figure(figsize=(8,6))
imshow(rc.result)
tight_layout()
# -
# **Example 10.** 3D masks can also be rendered:
# +
segmentation_mask_3d = ndi.label(ndi.binary_opening(ndi.gaussian_filter(data, 1) > 0.06))[0]
with cpy.SingleFrameContext((512, 1024), fov=90, near=1, far=1000) as rc:
    volume = rc.volume(data, spacing=spacing, normals=True, fmt_hint='uint8') ## declaration of the volume data
rc.dvr(translucency=2, sample_rate=500)
rc.mask(segmentation_mask_3d, 'borders-on-top', spacing=spacing)
rc.camera.translate(*volume.map_voxel_coordinates([[400, 200, 35]])[0]) \
.look_at(volume.map_voxel_coordinates([markers.mean(axis=0)]), up=(0,0,1)) \
.rotate((1,0,0), -10, 'deg')
figure(figsize=(8,6))
imshow(rc.result)
tight_layout()
# -
# Different flavors of mask renderings are available:
# - `borders-on-top`: Borders are rendered above the image (see above)
# - `regions-on-top`: Mask regions are rendered above the image
# - `borders-in-background`: The borders are rendered in the background
# - `regions`: The 3D regions are rendered as solid objects
#
# The mask used for rendering can be either a binary mask or a gray-value mask (e.g., to identify individual objects).
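To build intuition for what a "border" of a mask region is, here is a minimal sketch with `scipy.ndimage` (this illustrates the concept only and is unrelated to how Carna renders borders internally):

```python
import numpy as np
import scipy.ndimage as ndi

mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 1:4] = True  # a 3x3 square region

# The border consists of the mask pixels that disappear under erosion
border = mask & ~ndi.binary_erosion(mask)
print(int(border.sum()))  # 8: all region pixels except the single interior one
```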
|
examples/kalinin2018.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/shyam1234/PythonTutorials/blob/master/Python_Operators.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="MxYGU7FAfoAB" colab_type="text"
# **Python Operators**
#
# Operators are used to perform operations on variables and values.
# Python divides the operators in the following groups:
# ```
# Arithmetic operators
# Assignment operators
# Comparison operators
# Logical operators
# Identity operators
# Membership operators
# Bitwise operators
# ```
#
#
# + [markdown] id="TFRDTS5ZgH9v" colab_type="text"
# **Python Arithmetic Operators**
#
# Arithmetic operators are used with numeric values to perform common mathematical operations:
# + id="8SV-rIYdgLwd" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="b07a3c2a-0bea-434a-d821-a0e14b502db8"
a = 5 + 4
print (a)
# + [markdown] id="uD6xfhyRgdp2" colab_type="text"
# **Python Assignment Operators**
#
# Assignment operators are used to assign values to variables:
# + id="7QkXBUj4gVrP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="d3b1c4a8-95c1-4a9c-8bfc-910d432d6b53"
x = 30
x /= 3
print(x)
# + id="paCzGI2agocr" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="f97ce3e0-a068-43ee-ff1b-736cc0ff273d"
x= 34
x %= 3
print(x)
# + id="TBpiIIh4gvrl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="93a45543-c164-4e6a-8397-41a301c8cdc4"
x=40
x //= 3
print(x)
# + id="Lenr3lMyguYm" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="c0a8c36f-72b8-4763-a9a9-ea4d2783920c"
x=2
x **= 3
print(x)
# + id="za7d7dCwgtB3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="20f1a756-b43a-4ade-998b-a03cf843347f"
x=40
x >>= 3
print(x)
# + [markdown] id="G91FMBI7hFvl" colab_type="text"
# **Python Comparison Operators**
#
# Comparison operators are used to compare two values:
# + id="Z2MmBlzqglD2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="fdabda64-e2b8-4726-81b8-967521467d6d"
x= 20
y= 10
z = x >= y
print(z)
# + [markdown] id="7_0nlZPLhTwl" colab_type="text"
# **Python Logical Operators**
#
# Logical operators are used to combine conditional statements:
# + id="-dBgSQ6-hQdA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="380434c6-8aea-4c8e-cd41-8265108df132"
x = 2
print(not(x < 5 and x < 10))
print(x)
# + [markdown] id="v1FbUVNnhkQb" colab_type="text"
# **Python Identity Operators**
#
# Identity operators are used to compare objects: not whether they are equal, but whether they are actually the same object, occupying the same memory location.
#
#
# + id="P7_t0LHthZ0L" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="047fd292-4f9f-41e7-b419-8056fa045588"
x = 3
y = 2
print (x is not y)
# + id="K7hTlvu1hX45" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="1d780577-fa2a-4809-a2c1-ed9351d8afb8"
x = 3
y = 3
# CPython caches small integers, so both names refer to the same object here;
# this is an implementation detail -- use == (not is) to compare values.
print(x is y)
# + [markdown] id="lHT31IRjibmR" colab_type="text"
# **Python Membership Operators**
#
# Membership operators are used to test whether a sequence is present in an object
#
# + id="M_P2Q_0Oie2V" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="40f7dae9-c94e-411f-bcf8-90504581488a"
x = ["apple", "banana"]
print("banana" in x)
# returns True because a sequence with the value "banana" is in the list
# + id="K0ZkChefinB-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="b616c86d-1246-4167-d89a-259a24d35d95"
x = ["apple", "banana"]
print("pineapple" not in x)
# returns True because a sequence with the value "pineapple" is not in the list
# + [markdown] id="ApVhyjIXjF-R" colab_type="text"
# **Python Bitwise Operators**
#
# Bitwise operators are used to perform operations on the individual bits of (binary) numbers:
# + id="rNgDGOIPi02F" colab_type="code" colab={}
# & AND Sets each bit to 1 if both bits are 1
# | OR Sets each bit to 1 if one of two bits is 1
# ^ XOR Sets each bit to 1 if only one of two bits is 1
# ~ NOT Inverts all the bits
# << Zero fill left shift Shift left by pushing zeros in from the right and let the leftmost bits fall off
# >> Signed right shift Shift right by pushing copies of the leftmost bit in from the left, and let the rightmost bits fall off
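# To make the table above concrete, here is a quick demonstration on two small integers (the values are chosen arbitrarily):

```python
a = 0b1100  # 12
b = 0b1010  # 10

print(a & b)   # AND  -> 0b1000 -> 8
print(a | b)   # OR   -> 0b1110 -> 14
print(a ^ b)   # XOR  -> 0b0110 -> 6
print(~a)      # NOT  -> -(a + 1) -> -13
print(a << 2)  # left shift  -> 48
print(a >> 2)  # right shift -> 3
```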
# + [markdown] id="HJWWZ380fnkd" colab_type="text"
#
|
Python_Operators.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# +
# import matplotlib as mpl
# import matplotlib.pyplot as plt
# # %matplotlib inline
# +
# Click into this cell and press [Shift-Enter] to start.
# %run "sandpit-exercises.ipynb"
sp = sandpit_random_test()
def next_step(f, J, H):
    gamma = 0.5
    return -gamma * J
sp.next_step = next_step
sp.game_mode=2
sp.draw()
sp.showContours()
# -
sp.revealed = False
sp.draw()
sp.nGuess = 0
sp.showContours()
|
_math/Imperial College London - Math for ML/Sandpit+Debug.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Example with DataLinks and LinkLikelihood class
# Testing a way to measure scores from co-occurrences in the data.
# +
# data locations
DATASET = "C:\\Users\\FlorianHuber\\OneDrive - Netherlands eScience Center\\Project_Wageningen_iOMEGA"
PATH_MS2LDA = DATASET + "\\lda\\code\\"
PATH_MGF_DATA = DATASET + "\\Data\\Crusemann_dataset\\Crusemann_only_Clutered_Data\\"
MIBIG_JSON_DIR = DATASET + "\\Data\\mibig\\mibig_json_1.4"
NODES_FILE = PATH_MGF_DATA + "clusterinfosummarygroup_attributes_withIDs\\0d51c5b6c73b489185a5503d319977ab..out"
MGF_FILE = PATH_MGF_DATA + "METABOLOMICS-SNETS-c36f90ba-download_clustered_spectra-main.mgf"
EDGES_FILE = PATH_MGF_DATA + 'networkedges_selfloop\\9a93d720f69143bb9f971db39b5d2ba2.pairsinfo'
ROOT_PATH = DATASET + "\\Data\mibig_select\\"
FOLDERS = ['NRPS','Others','PKSI','PKS-NRP_Hybrids','PKSother','RiPPs','Saccharides','Terpene']
ANTISMASH_DIR = DATASET +"\\Data\\Crusemann_dataset\\bgc_crusemann\\"
from nplinker_constants import nplinker_setup
nplinker_setup(LDA_PATH=PATH_MS2LDA)
# +
# import from NPlinker
from metabolomics import load_spectra
from metabolomics import load_metadata
from metabolomics import load_edges
from metabolomics import make_families
from genomics import loadBGC_from_cluster_files
from genomics import make_mibig_bgc_dict
from data_linking import DataLinks
from data_linking import LinkLikelihood
from data_linking import LinkFinder
# import general packages
import os
import glob
# +
# load, initialize data
nplinker_setup(LDA_PATH=PATH_MS2LDA)
spectra = load_spectra(MGF_FILE)
load_edges(spectra, EDGES_FILE)
#families = make_families(spectra)
metadata = load_metadata(spectra, NODES_FILE)
input_files = []
ann_files = []
mibig_bgc_dict = make_mibig_bgc_dict(MIBIG_JSON_DIR)
for folder in FOLDERS:
    fam_file = os.path.join(ROOT_PATH, folder)
    cluster_file = glob.glob(fam_file + os.sep + folder + "_clustering*")
    annotation_files = glob.glob(fam_file + os.sep + "Network_*")
    input_files.append(cluster_file[0])
    ann_files.append(annotation_files[0])
gcf_list, bgc_list, strain_list = loadBGC_from_cluster_files(input_files, ann_files, antismash_dir=ANTISMASH_DIR, antismash_format = 'flat', mibig_bgc_dict=mibig_bgc_dict)
# -
# Now the data from the gene cluster families and spectra is loaded and initialized.
#
# The classes **DataLinks** and **LinkLikelihood** were written to test a possible alternative way to get correlation scores. The scoring is based on creating numpy co-occurrence matrices, which in principle should allow for very fast calculations.
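# As a toy illustration of that idea (not the actual DataLinks implementation), co-occurrence counts across strains reduce to a single matrix product of binary presence/absence matrices:

```python
import numpy as np

# Made-up presence/absence matrices (rows: spectra or GCFs, columns: strains).
# A real dataset would have hundreds of strains; these values are illustrative.
spec_in_strain = np.array([[1, 0, 1, 1],
                           [0, 1, 0, 1]])
gcf_in_strain = np.array([[1, 1, 1, 0],
                          [0, 0, 1, 1]])

# Entry (i, j): number of strains containing both spectrum i and GCF j.
cooccurrence = spec_in_strain @ gcf_in_strain.T
print(cooccurrence)
```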
# extract relevant linking mappings and calculate co-occurrences
data_links = DataLinks()
data_links.load_data(spectra, gcf_list, strain_list)
data_links.find_correlations(include_singletons=False)
# Calculate link probabilities, such as:
# P(gcf_x | spec_y) = probability to find gcf_x in a strain, given spec_y is present
# P(gcf_x | not spec_y) = probability to find gcf_x in a strain, given spec_y is NOT present
likelihoods = LinkLikelihood()
likelihoods.calculate_likelihoods(data_links, type='spec-gcf')
likelihoods.calculate_likelihoods(data_links, type='fam-gcf')
# ### Select potential link candidates
# #### Search for links between GCFs and spectra, and between GCFs and mol. families
# Can now be done using the metcalf score or based on the likelihood ("likelihood score").
# +
linkcandidates = LinkFinder()
linkcandidates.metcalf_scoring(data_links,
both=10,
type1_not_gcf=-10,
gcf_not_type1=0,
type='spec-gcf')
linkcandidates.metcalf_scoring(data_links,
both=10,
type1_not_gcf=-10,
gcf_not_type1=0,
type='fam-gcf')
# -
# Here is the second type of score (the one I had used before). Like the Metcalf score, it takes into account the directionality BGC-->compound-->spectrum, which suggests that the most relevant likelihoods are:
#
# P(gcf|type1) - If type1 is the result of only one particular gene cluster,
# this value should be high (close or equal to 1)
#
# P(type1|not gcf) - Following the same logic, this value should be very
# small or 0 ("no gene cluster, no compound").
#
# This is then weighted by the number of strains this co-occurrence was found in.
# +
linkcandidates.likelihood_scoring(data_links, likelihoods,
alpha_weighing=0.5,
type='spec-gcf')
linkcandidates.likelihood_scoring(data_links, likelihoods,
alpha_weighing=0.5,
type='fam-gcf')
# -
# Using those scores it is then possible to select suitable, promising candidates for potential links between spectra and GCFs, or mol.families and GCFs.
# +
link_candidates_spec = linkcandidates.select_link_candidates(data_links, likelihoods,
P_cutoff=0.9,
main_score='likescore',
score_cutoff=0.6,
type='spec-gcf')
link_candidates_fam = linkcandidates.select_link_candidates(data_links, likelihoods,
P_cutoff=0.8,
main_score='likescore',
score_cutoff=0,
type='fam-gcf')
# -
# What becomes apparent is that the score I used and the Metcalf score are indeed very similar, in the sense that they sort candidates nearly the same way. The score used here ("likescore" or "likelihood score") might have the advantage that it is normalized to (0,1).
# Show table of potential gcf<-> spectrum link candidates
#link_candidates_fam.head()
link_candidates_fam.nlargest(10, 'likelihood score')
import numpy as np
np.mean(linkcandidates.metcalf_spec_gcf)
# ## Metcalf score with neither:
# Still seems problematic to me. For the Crusemann dataset most GCFs and spectra only occur in 1 or 2 strains. Given that we have about 140 strains, they would still score around +100 to +130 quite often.
from data_linking import RandomisedDataLinks
# +
ms = linkcandidates.metcalf_scoring(data_links,
both=10,
type1_not_gcf=-10,
gcf_not_type1=0,
not_type1_not_gcf=0,
type='spec-gcf')
print(linkcandidates.metcalf_spec_gcf.shape)
ls = linkcandidates.likescores_spec_gcf
print(ls.shape)
# -
# Moving on to do some tests with the
# ### RandomisedDataLinks...
# To see how reproducible the method is, I ran RandomisedDataLinks 50 times and compared the results:
# +
test_runs = 50
test_max_rms = np.zeros((ls.shape[0], test_runs))
test_max_rls = np.zeros((ls.shape[0], test_runs))
for i in range(test_runs):
    rdata_links = RandomisedDataLinks.from_datalinks(data_links)
    rdata_links.find_correlations(include_singletons=False)
    rms = linkcandidates.metcalf_scoring(rdata_links, not_type1_not_gcf=0, type='spec-gcf')
    rls = linkcandidates.likelihood_scoring(rdata_links, likelihoods,
                                            alpha_weighing=0.5,
                                            type='spec-gcf')
    test_max_rms[:,i] = np.max(rms, axis=1)
    test_max_rls[:,i] = np.max(rls, axis=1)
# +
import pandas as pd
rms_summary = np.zeros((ls.shape[0], 4))
rms_summary[:,0] = np.mean(test_max_rms, axis=1)
rms_summary[:,1] = np.std(test_max_rms, axis=1)
rms_summary[:,2] = np.min(test_max_rms, axis=1)
rms_summary[:,3] = np.max(test_max_rms, axis=1)
rms_summary = pd.DataFrame(rms_summary, columns = ['Mean','STD','Min','Max'])
rls_summary = np.zeros((ls.shape[0], 4))
rls_summary[:,0] = np.mean(test_max_rls, axis=1)
rls_summary[:,1] = np.std(test_max_rls, axis=1)
rls_summary[:,2] = np.min(test_max_rls, axis=1)
rls_summary[:,3] = np.max(test_max_rls, axis=1)
rls_summary = pd.DataFrame(rls_summary, columns = ['Mean','STD','Min','Max'])
# -
rms_summary.head(10) # Metcalf score (neither =0)
rls_summary.head(10) # Likelihood score
# ### Issues with the significance measure
# It appears that max_rand and min_rand within the get_sig_links function are not a good enough measure to decide whether a potential link is really significant.
# In the end they fluctuate a lot and will give different results every time the analysis is run.
#
# My feeling is that we cannot assign a significance value per GCF or per spectrum (at least not this way).
# What we can get is a feeling for how likely it is to obtain certain likelihood or Metcalf scores simply by chance!
#
# Unfortunately, no matter how high the score, it seems that chances remain fairly high that it is only a false positive.
# Counting scores in bins
sum_rls = []
sum_ls = []
for i in range(100):
    sum_rls.append(np.sum((rls > i/100) & (rls < (i+1)/100)))
    sum_ls.append(np.sum((ls > i/100) & (ls < (i+1)/100)))
from matplotlib import pyplot as plt
plt.plot(sum_rls, 'r', label='randomized data')
plt.plot(sum_ls, 'black', label='original data')
plt.legend()
# same for metcalf scores (here only positive ones)
sum_rms = []
sum_ms = []
for i in range(100):
    sum_rms.append(np.sum((rms > i) & (rms <= (i+1))))
    sum_ms.append(np.sum((ms > i) & (ms <= (i+1))))
plt.plot(sum_rms, 'r', label='randomized data')
plt.plot(sum_ms, 'black', label='original data')
plt.legend()
# #### Export to cytoscape
# There is now an added function to create network files that can be imported to Cytoscape.
# This will create a network from spectra <-> GCF links (including molecular family member links)
# Output is a graphml file
linkcandidates.create_cytoscape_files(data_links,
'test_network.graphml',
link_type='spec-gcf',
score_type='likescore')
# do some test plotting to inspect the kind of results we get from there...
M_links = linkcandidates.plot_candidates(P_cutoff=0.95,
score_type='likescore',
score_cutoff=0.6,
type='spec-gcf')
# #### Show link candidates between GCFs and molecular families
# Again done based on "likelihood score", but using metcalf scores gives nearly identical results here.
# do some test plotting to inspect the kind of results we get from there...
M_links = linkcandidates.plot_candidates(P_cutoff=0.9,
score_type='likescore',
score_cutoff=0.5,
type='fam-gcf')
|
notebooks/nplinker_correlation_probabilities.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from data_repo import ipa_data
import numpy as np
import matplotlib.pyplot as plt
ipa_features = ipa_data['features']
row_count, feature_count = ipa_features.shape
# Jaccard similarity: |X ∩ Y| / (|X ∩ Y| + |X − Y| + |Y − X|) = |X ∩ Y| / |X ∪ Y|
similarity = np.zeros((row_count, row_count))
pos_features = ipa_features == 1
feature_mask = ipa_features != 1
for i in range(row_count):
    x = pos_features[i]
    x_intersec_y = np.logical_and(x, pos_features)  # row j: x AND y_j
    x_minus_y = np.logical_and(x, feature_mask)     # row j: x AND NOT y_j
    y_minus_x = np.logical_and(pos_features, ~x)    # row j: y_j AND NOT x (depends only on x, so hoisted out of the inner loop)
    for j in range(row_count):  # j enumerates y
        intersec_sum = np.sum(x_intersec_y[j])
        union_sum = intersec_sum + np.sum(x_minus_y[j]) + np.sum(y_minus_x[j])
        similarity[i, j] = intersec_sum / union_sum
print(similarity)
fig = plt.figure(figsize=(20,20))
plt.imshow(similarity, cmap='hot', interpolation='nearest')
plt.show()
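# For larger feature matrices the same Jaccard similarity can be computed without the Python-level double loop, using a single matrix product; a minimal sketch (assuming a binary 0/1 feature matrix in which every row has at least one positive feature, so no union is empty):

```python
import numpy as np

def jaccard_matrix(features):
    """Pairwise Jaccard similarity between rows of a binary 0/1 matrix."""
    X = (features == 1).astype(int)
    intersection = X @ X.T                       # |A ∩ B| for every row pair
    row_sums = X.sum(axis=1)
    union = row_sums[:, None] + row_sums[None, :] - intersection
    return intersection / union                  # |A ∩ B| / |A ∪ B|

demo = np.array([[1, 1, 0], [1, 0, 1], [1, 1, 1]])
print(jaccard_matrix(demo))
```

# The result matches the loop above because intersec + x_minus_y + y_minus_x is exactly the size of the union.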
# -
|
feature_contrast.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# <img src="../images/aeropython_logo.png" alt="AeroPython" style="width: 300px;"/>
# # Exercises: loops and conditionals
# _Let's consolidate the Python knowledge we have just acquired by doing some exercises, so we retain the peculiarities of the syntax and clear up a few details to keep in mind when working in interactive mode._
# ## Exercise 1: Summation
# Let's now write a function that sums the first `n` natural numbers. Note that we can write a **documentation string** (_docstring_) right below the function definition to explain what it does.
def sumatorio(num):
    """Sum the first `num` natural numbers.

    Examples
    --------
    >>> sumatorio(4)
    10
    """
    suma = 0
    for nn in range(1, num + 1):
        suma = nn + suma
    return suma
# What we did was initialize the sum to 0 and accumulate the first `num` natural numbers into it.
sumatorio(4)
help(sumatorio)
# <div class="alert alert-warning">Observa lo que sucede si no inicializamos la suma:</div>
def sumatorio_mal(num):
    for nn in range(1, num + 1):
        suma = nn + suma
    return suma
sumatorio_mal(4)
# To check that the result is correct, nothing beats Python's built-in `sum` function, which adds up the elements we pass to it:
list(range(1, 4 + 1))
sum(range(1, 4 + 1))
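# As a side note not in the original exercise, the same sum has the classic closed form $n(n+1)/2$, so the loop can be checked (or skipped) with a one-liner; `sumatorio_formula` is just an illustrative name:

```python
def sumatorio_formula(num):
    """Sum of the first `num` natural numbers, via Gauss's closed form."""
    return num * (num + 1) // 2

# Both approaches should agree:
print(sumatorio_formula(4))    # 10
print(sumatorio_formula(100))  # 5050
```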
# ## Exercise 2: Summation with an upper bound
# Now our function is a little stranger: it has to sum consecutive natural numbers without exceeding a given limit. In addition, we want the value of the sum.
def suma_tope(tope):
    """Sum consecutive natural numbers without exceeding a given limit."""
    suma = 0
    nn = 1
    while suma + nn <= tope:
        suma = suma + nn
        nn += 1
    return suma
suma_tope(9)
suma_tope(9) == 1 + 2 + 3
suma_tope(10) == 1 + 2 + 3 + 4
# The `assert` keyword receives a true or false expression and fails if it is false. If it is true it does nothing, which makes it perfect for mid-code checks that don't get in the way.
assert suma_tope(11) == 1 + 2 + 3 + 4
assert suma_tope(10 + 5) == 1 + 2 + 3 + 4 + 5
# ## Exercise 3: Exam regulations
# The exam regulation says: *"if an exam lasts more than 3 hours, then it must include a break"*. The function's arguments are the time in hours and a `True` or `False` value indicating whether there is a break.
def cumple_normativa(tiempo, descanso):
    """Check whether an exam complies with the UPM regulations."""
    if tiempo <= 3:
        return True
    else:
        #if descanso:
        #    return True
        #else:
        #    return False
        return descanso  # Equivalent!
cumple_normativa(2, False)
if not cumple_normativa(5, descanso=False):
    print("Talk to DA!")
# ## Exercise 4
# Find $x = \sqrt{S}$.
#
# 1. $\displaystyle \tilde{x} \leftarrow \frac{S}{2}$.
# 2. $\displaystyle \tilde{x} \leftarrow \frac{1}{2}\left(\tilde{x} + \frac{S}{\tilde{x}}\right)$.
# 3. Repeat (2) until an iteration limit is reached or a convergence criterion is met.
#
# http://en.wikipedia.org/wiki/Methods_of_computing_square_roots#Babylonian_method
def raiz(S):
    x = S / 2
    while True:
        temp = x
        x = (x + S / x) / 2
        if temp == x:
            return x
# Here I am using a floating-point arithmetic trick: since convergence is reached quickly, there comes a point where the error is smaller than the machine precision and the value no longer changes from one step to the next.
raiz(10)
# <div class="alert alert-info">Se deja como ejercicio implementar otras condiciones de convergencia: error relativo por debajo de un umbral o número máximo de iteraciones.</div>
import math
math.sqrt(10)
raiz(10) ** 2
math.sqrt(10) ** 2
# Now you're curious, aren't you? :) http://puntoflotante.org/
# ## Exercise 5
# Fibonacci sequence: $F_n = F_{n - 1} + F_{n - 2}$, with $F_0 = 0$ and $F_1 = 1$.
#
# $$0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, ...$$
# With iteration:
def fib(n):
    a, b = 0, 1
    for i in range(n):
        a, b = b, a + b  # Blessed multiple assignment
    return a
fib(0), fib(3), fib(10)
# With recursion:
def fib_recursivo(n):
    if n == 0:
        res = 0
    elif n == 1:
        res = 1
    else:
        res = fib_recursivo(n - 1) + fib_recursivo(n - 2)
    return res
# Print a list with the first $n$:
def n_primeros(n):
    F = fib_recursivo
    lista = []
    for ii in range(n):
        lista.append(F(ii))
    return lista
n_primeros(10)
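# The recursive version recomputes the same subproblems exponentially many times. One common fix, shown here as an illustrative sketch with `functools.lru_cache`, keeps the recursive shape while caching results:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print([fib_memo(i) for i in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```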
# ## Exercise 6: the d'Hondt law
# Implement the d'Hondt seat-allocation system. This system consecutively awards each seat to the party with the highest coefficient, $c_i = \frac{V_i}{s_i + 1}$, where $V_i$ is the total number of votes obtained by party $i$ and $s_i$ is the number of seats already assigned to that party (0 at the start of the allocation).
#
# Let's look at the example given on [Wikipedia](https://es.wikipedia.org/wiki/Sistema_d'Hondt):
#
# | *Party* | Party A | Party B | Party C | Party D | Party E |
# |:----------|----------:|----------:|----------:|----------:|----------:|
# | *Votes* | 340000 | 280000 | 160000 | 60000 | 15000 |
#
# No seat has been assigned yet, so each party's votes are divided by 1:
#
# | *Party* | Party A | Party B | Party C | Party D | Party E |
# |:-----------|-----------:|-----------:|-----------:|-----------:|-----------:|
# | *Votes* | 340000 | 280000 | 160000 | 60000 | 15000 |
# | *Seat 1* | **340000** | 280000 | 160000 | 60000 | 15000 |
#
# and therefore party A receives the first seat. To allocate the second seat, each party's votes are again divided by 1, except party A's, which are divided by 2 because it already holds one seat:
#
# | *Party* | Party A | Party B | Party C | Party D | Party E |
# |:-----------|-----------:|-----------:|-----------:|-----------:|-----------:|
# | *Votes* | 340000 | 280000 | 160000 | 60000 | 15000 |
# | *Seat 1* | **340000** | 280000 | 160000 | 60000 | 15000 |
# | *Seat 2* | 170000 | **280000** | 160000 | 60000 | 15000 |
#
# So the second seat goes to party B. If 7 seats are allocated, as in the Wikipedia example, the final table looks like this:
#
# | *Party* | Party A | Party B | Party C | Party D | Party E |
# |:-----------|-----------:|-----------:|-----------:|-----------:|-----------:|
# | *Votes* | 340000 | 280000 | 160000 | 60000 | 15000 |
# | *Seat 1* | **340000** | 280000 | 160000 | 60000 | 15000 |
# | *Seat 2* | 170000 | **280000** | 160000 | 60000 | 15000 |
# | *Seat 3* | **170000** | 140000 | 160000 | 60000 | 15000 |
# | *Seat 4* | 113333 | 140000 | **160000** | 60000 | 15000 |
# | *Seat 5* | 113333 | **140000** | 80000 | 60000 | 15000 |
# | *Seat 6* | **113333** | 93333 | 80000 | 60000 | 15000 |
# | *Seat 7* | 85000 | **93333** | 80000 | 60000 | 15000 |
#
# So parties A and B would each obtain 3 seats, while party C would obtain a single seat, leaving the remaining parties out of the allocation.
def hondt(votos, n):
    s = [0] * len(votos)
    for i in range(n):
        c = [votos[j] / (s[j] + 1) for j in range(len(s))]
        s[c.index(max(c))] += 1
    return s
v = [340000, 280000, 160000, 60000, 15000]
n = 7
hondt(v, n)
# ---
# _In this class we have seen how to create functions that encapsulate tasks of our program, and we have applied them to answer some simple questions._
#
# **References**
#
# * Book "Learn Python the Hard Way" http://learnpythonthehardway.org/book/
# * Python Tutor, to visualize Python code step by step http://pythontutor.com/
# * Book "How To Think Like a Computer Scientist" http://interactivepython.org/runestone/static/thinkcspy/toc.html
# * Project Euler: exercises to learn Python https://projecteuler.net/problems
# * Python Challenge (!) http://www.pythonchallenge.com/
# ---
# <br/>
# #### <h4 align="right">Follow us on Twitter!
# <br/>
# ###### <a href="https://twitter.com/AeroPython" class="twitter-follow-button" data-show-count="false">Follow @AeroPython</a> <script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+'://platform.twitter.com/widgets.js';fjs.parentNode.insertBefore(js,fjs);}}(document, 'script', 'twitter-wjs');</script>
# <br/>
# ###### This notebook was created by: <NAME>, <NAME> and <NAME>
# <br/>
# ##### <a rel="license" href="http://creativecommons.org/licenses/by/4.0/deed.es"><img alt="Licencia Creative Commons" style="border-width:0" src="http://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" property="dct:title">Curso AeroPython</span> by <span xmlns:cc="http://creativecommons.org/ns#" property="cc:attributionName"><NAME> and <NAME></span> is distributed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/deed.es">Creative Commons Attribution 4.0 International License</a>.
# ---
# _The following cells contain notebook configuration_
#
# _To display and use the Twitter links, the notebook must be run as [trusted](http://ipython.org/ipython-doc/dev/notebook/security.html)_
#
# File > Trusted Notebook
# This cell applies the notebook's style
from IPython.core.display import HTML
css_file = '../styles/aeropython.css'
HTML(open(css_file, "r").read())
|
notebooks_completos/004-PythonBasico-EjerciciosBuclesCondicionales.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.9.8 64-bit (''.venv'': venv)'
# language: python
# name: python3
# ---
import csv
import re
import pandas as pd
from pathlib import Path
import matplotlib.pyplot as plt
import math
def create_data_stucture(map_data_file):
    """Create a data structure from a data file that allows rapidly querying positions on the map
    Args:
        map_data_file (Path): csv file with the following columns:
            city,lat,lng,country,iso3,local_name,population,continent
    Returns:
        data structure that can be used with find_closest_cities
    HINT:
        binary space partitioning built up with recursive functions
    """
    cities_df = pd.read_csv(map_data_file)
    return cities_df
def find_closest_cities(point, data_structure):
    """Find the ten cities closest to the given point on the world map, given the data structure
    Args:
        point (tuple of floats): latitude and longitude
        data_structure (your choice): to be queried using the point
    Returns:
        the 10 cities closest to the point
    """
    distance_list = []
    for lat, lng in zip(data_structure['lat'].tolist(), data_structure['lng'].tolist()):
        distance_list.append(math.dist(point, (lat, lng)))
    data_structure['Distance'] = distance_list
    data_structure = data_structure.sort_values(by="Distance")
    closest_10_cities = data_structure.head(10)
    return closest_10_cities
point = (49.4, 8.67)
file_dir = Path("D:/EIGENE DATEIEN/Documents/!UNI/Coding/Advanced Python/advanced_python_2021-22_HD-1/data/cities.csv")
data_structure = create_data_stucture(file_dir)
print(find_closest_cities(point, data_structure))
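# The docstring's HINT points at binary space partitioning (e.g. a k-d tree), which would make queries sublinear; even for the brute-force approach, though, a full sort is unnecessary — `heapq.nsmallest` selects the k nearest in one pass. A sketch with made-up in-memory rows (the real code reads `cities.csv`; plain Euclidean distance on lat/lng is only a rough approximation, and haversine distance would be more accurate):

```python
import heapq
import math

# Hypothetical stand-in for the CSV rows: (city, lat, lng).
cities = [
    ("Heidelberg", 49.41, 8.69),
    ("Mannheim", 49.49, 8.47),
    ("Berlin", 52.52, 13.40),
    ("Munich", 48.14, 11.58),
]

def closest_cities(point, rows, k=10):
    """The k rows with the smallest Euclidean distance in (lat, lng) space."""
    return heapq.nsmallest(k, rows, key=lambda r: math.dist(point, (r[1], r[2])))

print(closest_cities((49.4, 8.67), cities, k=2))
```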
|
Exercise/day3/Lukas_day3_boring_version.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: PaddlePaddle 2.0.0b0 (Python 3.5)
# language: python
# name: py35-paddle1.2.0
# ---
# # PaddlePaddle High-Level API Guide
#
# **Author:** [PaddlePaddle](https://github.com/PaddlePaddle) <br>
# **Date:** 2021.05 <br>
# **Abstract:** This tutorial is a detailed guide to the PaddlePaddle high-level API, showing how to use it to complete deep learning tasks quickly.
# ## 1. Introduction
#
# PaddlePaddle 2.0 introduces a brand-new high-level API: a further encapsulation and upgrade of the PaddlePaddle API that provides a more concise, easier-to-use interface, improving the framework's learnability and usability while extending its functionality.
#
# The high-level API targets everyone from deep learning beginners to experienced developers: beginners can build deep learning projects quickly and simply, while experienced developers can iterate on algorithms faster.
#
# The high-level API has the following characteristics:
#
# * Easy to learn and use: it further wraps and optimizes the ordinary dynamic-graph API while remaining compatible with it; the same implementation written with the high-level API saves a large amount of code.
# * Low-code development: a clear reduction in the amount of code you need to write.
# * Dynamic/static conversion: changing a single line of code is enough to train dynamic-graph code in static-graph mode, combining the debugging convenience of dynamic graphs with the training efficiency of static graphs.
#
# In terms of features and usage, the high-level API brings the following upgrades:
#
# * Upgraded model training: the high-level API wraps the Model class; a network handed to the Model class can be trained with just a few lines of code.
# * New image preprocessing module `transform`: dozens of data-processing functions covering the most common preprocessing and augmentation methods.
# * Ready-to-use network models: common computer-vision and NLP models, including but not limited to mobilenet, resnet, yolov3, cyclegan, bert, transformer, seq2seq, together with pretrained weights that can be used directly or fine-tuned.
#
# ## 2. Installing and using the high-level API
#
# The high-level API needs no separate installation; installing paddlepaddle is enough. If your environment does not match this version, first follow the official [installation guide](https://www.paddlepaddle.org.cn/install/quick) for Paddle 2.1.
#
# After installation, `import paddle` gives access to the high-level API, e.g. paddle.Model, the vision domain paddle.vision, and the NLP domain paddle.text.
# +
import paddle
import paddle.vision as vision
import paddle.text as text
paddle.__version__
# -
# ## 3. Contents
#
# This guide covers:
#
# * Training deep learning tasks with the built-in datasets provided by the high-level API.
# * Defining, preprocessing and training on custom data.
# * Applying data-augmentation interfaces during dataset definition and loading.
# * Building the model network.
# * The training-related APIs of the high-level API.
# * Customizing training with the base API when the fit interface does not meet your needs.
# * Using multiple GPUs to speed up training.
# ## 4. Dataset definition, loading and preprocessing
#
# Deep learning frameworks compute on numeric data of various types and cannot directly consume raw files such as images and text, so a processing step is needed: turning the raw data files into data that a deep learning task can use.
#
# ### 4.1 Using the framework's built-in datasets
#
# The high-level API provides a number of commonly used datasets as domain APIs under `paddle.vision.datasets`; let's first see which datasets are available.
print('Vision datasets:', paddle.vision.datasets.__all__)
print('NLP datasets:', paddle.text.__all__)
# Here we load a handwritten-digit recognition dataset, using `mode` to select the training or test split. The dataset interface automatically downloads the data from a remote server to the local cache directory `~/.cache/paddle/dataset`.
# +
from paddle.vision.transforms import ToTensor
# Training dataset
train_dataset = vision.datasets.MNIST(mode='train', transform=ToTensor())
# Validation dataset
val_dataset = vision.datasets.MNIST(mode='test', transform=ToTensor())
# -
# ### 4.2 Custom datasets
#
# More often you will need to define a dataset from your own existing data. Let's look at an example of how to do this: Paddle provides the `paddle.io.Dataset` base class, and subclassing it is a quick way to define a dataset.
# +
from paddle.io import Dataset
class MyDataset(Dataset):
    """
    Step 1: inherit from paddle.io.Dataset
    """
    def __init__(self, mode='train'):
        """
        Step 2: implement the constructor, define how the data is read, and split training and test sets
        """
        super(MyDataset, self).__init__()
        if mode == 'train':
            self.data = [
                ['traindata1', 'label1'],
                ['traindata2', 'label2'],
                ['traindata3', 'label3'],
                ['traindata4', 'label4'],
            ]
        else:
            self.data = [
                ['testdata1', 'label1'],
                ['testdata2', 'label2'],
                ['testdata3', 'label3'],
                ['testdata4', 'label4'],
            ]
    def __getitem__(self, index):
        """
        Step 3: implement __getitem__, defining how to fetch the sample at a given index and returning a single sample (training data, corresponding label)
        """
        data = self.data[index][0]
        label = self.data[index][1]
        return data, label
    def __len__(self):
        """
        Step 4: implement __len__, returning the total number of samples
        """
        return len(self.data)
# Test the dataset we defined
train_dataset_2 = MyDataset(mode='train')
val_dataset_2 = MyDataset(mode='test')
print('=============train dataset=============')
for data, label in train_dataset_2:
print(data, label)
print('=============evaluation dataset=============')
for data, label in val_dataset_2:
print(data, label)
# -
# ### 4.3 Data augmentation
#
# Overfitting is a common problem during training; one remedy is to augment the training data, processing it into varied images and thereby generalizing the dataset. The augmentation APIs are defined under the domain's transforms directory. Two usage patterns are introduced here: one based on the framework's built-in datasets, and one based on your own dataset.
#
# #### 4.3.1 Built-in datasets
# +
from paddle.vision.transforms import Compose, Resize, ColorJitter
# Define the augmentations to use; here we randomly adjust brightness, contrast and saturation, and resize the image
transform = Compose([ColorJitter(), Resize(size=100)])
# Passing the augmentation pipeline via the transform parameter applies it to the built-in dataset
train_dataset_3 = vision.datasets.MNIST(mode='train', transform=transform)
# -
# #### 4.3.2 Custom datasets
#
# There are two ways to apply augmentation to a custom dataset: define the augmentation methods in the dataset's constructor and apply them to the data returned by __getitem__, or expose a constructor parameter on the dataset class and pass the augmentation methods in when instantiating it.
# +
from paddle.io import Dataset
class MyDataset(Dataset):
    def __init__(self, mode='train'):
        super(MyDataset, self).__init__()
        if mode == 'train':
            self.data = [
                ['traindata1', 'label1'],
                ['traindata2', 'label2'],
                ['traindata3', 'label3'],
                ['traindata4', 'label4'],
            ]
        else:
            self.data = [
                ['testdata1', 'label1'],
                ['testdata2', 'label2'],
                ['testdata3', 'label3'],
                ['testdata4', 'label4'],
            ]
        # Define the data preprocessing to use; these operate on images
        self.transform = Compose([ColorJitter(), Resize(size=100)])
    def __getitem__(self, index):
        data = self.data[index][0]
        # Apply the transform to the training data here
        # (only an illustration -- switch the dataset to image data to test it)
        data = self.transform(data)
        label = self.data[index][1]
        return data, label
    def __len__(self):
        return len(self.data)
# -
# ## 5. Building the network
#
# Network building with the high-level API uses exactly the same mechanism as the base API, with no extra learning cost. A few simple examples follow.
#
# ### 5.1 Sequential networks
#
# A purely sequential, linear network structure can be built directly with Sequential, avoiding the boilerplate of a class definition.
# Sequential-style network
mnist = paddle.nn.Sequential(
    paddle.nn.Flatten(),
    paddle.nn.Linear(784, 512),
    paddle.nn.ReLU(),
    paddle.nn.Dropout(0.2),
    paddle.nn.Linear(512, 10)
)
# ### 5.2 Subclass networks
#
# More complex network structures can be written as Layer subclasses: declare the Layers in the `__init__` constructor and use the declared Layer variables for the forward computation in `forward`. Subclassing also allows sublayer reuse: a Layer defined once in the constructor can be called multiple times in forward.
# +
# Layer-subclass-style network
class Mnist(paddle.nn.Layer):
    def __init__(self):
        super(Mnist, self).__init__()
        self.flatten = paddle.nn.Flatten()
        self.linear_1 = paddle.nn.Linear(784, 512)
        self.linear_2 = paddle.nn.Linear(512, 10)
        self.relu = paddle.nn.ReLU()
        self.dropout = paddle.nn.Dropout(0.2)
    def forward(self, inputs):
        y = self.flatten(inputs)
        y = self.linear_1(y)
        y = self.relu(y)
        y = self.dropout(y)
        y = self.linear_2(y)
        return y
mnist_2 = Mnist()
# -
# ### 5.3 Model wrapping
#
# Once the network structure is defined, use `paddle.Model` to wrap it into a class that can be trained, evaluated and used for prediction quickly with the high-level API.
#
# There are two wrapping scenarios: dynamic-graph training mode and static-graph training mode.
# +
# Train on GPU
# paddle.set_device('gpu')
# Wrap the model
## Scenario 1: dynamic-graph mode
## 1.1 Training a model that will be deployed for inference
## input and label specs must be provided, otherwise the inference model saved with model.save(training=False) will fail when used
inputs = paddle.static.InputSpec([-1, 1, 28, 28], dtype='float32', name='input')
label = paddle.static.InputSpec([-1, 1], dtype='int8', name='label')
model = paddle.Model(mnist, inputs, label)
## 1.2 Training for experimentation only
## input and label info can be omitted
# model = paddle.Model(mnist)
## Scenario 2: static-graph mode
# paddle.enable_static()
# paddle.set_device('gpu')
# input = paddle.static.InputSpec([None, 1, 28, 28], dtype='float32')
# label = paddle.static.InputSpec([None, 1], dtype='int8')
# model = paddle.Model(mnist, input, label)
# -
# ### 5.4 Model visualization
#
# After assembling the network you usually want to visualize it and check the per-layer parameters against your expectations. This can be done with the `Model.summary` interface.
model.summary((1, 28, 28))
# The summary interface can be used in a second way as well: besides `Model.summary`, which accompanies the `paddle.Model` wrapper, there is a variant for Layer subclasses that have not been wrapped in `paddle.Model` — simply pass the instantiated Layer subclass to `paddle.summary`.
paddle.summary(mnist, (1, 28, 28))
# One point deserves attention: some readers may wonder why the input_size argument `(1, 28, 28)` has to be passed. In dynamic-graph mode the input shape is not yet known when the network is defined, so the structure cannot be rendered; telling the interface the input shape lets it derive the complete structure through layer-by-layer computation. If an InputSpec (which includes the input shape) was already defined when wrapping with Model, the summary interface does not need the input-shape argument.
# ## 6. Model training
#
# Once the network is wrapped into a model with `paddle.Model`, running it is very concise: the whole training process is completed by calling `Model.fit`.
#
# Before starting training with `Model.fit`, configure it in advance with `Model.prepare`: set the model optimizer, the loss computation method, the metric computation method, and so on.
# Prepare for training: set the optimizer, loss function, and metric
model.prepare(paddle.optimizer.Adam(parameters=model.parameters()),
paddle.nn.CrossEntropyLoss(),
paddle.metric.Accuracy())
# With the preparation done, call `fit()` to start training, specifying at least three key parameters: the training dataset, the number of epochs, and the batch size.
# Start training: specify the training dataset, the number of epochs, the batch size, and the logging format
model.fit(train_dataset,
epochs=5,
batch_size=64,
verbose=1)
# **Note:**
#
# The first argument of `fit()` accepts not only a `paddle.io.Dataset` but also a DataLoader. If you need custom dataset-sampling or similar logic, build the DataLoader yourself outside fit and pass it to the fit function.
#
# ```python
# train_dataloader = paddle.io.DataLoader(train_dataset)
# ...
# model.fit(train_dataloader, ...)
# ```
# ### 6.1 Single machine, single GPU
#
# Putting the step-by-step training code above together gives a complete single-machine, single-GPU training program.
# +
# Train on GPU
# paddle.set_device('gpu')
# Build the Model for training, specifying which network to train
model = paddle.Model(mnist)
# Prepare for training: set the optimizer, loss function, and metric
model.prepare(paddle.optimizer.Adam(parameters=model.parameters()),
              paddle.nn.CrossEntropyLoss(),
              paddle.metric.Accuracy())
# Start training: specify the training dataset, the number of epochs, the batch size, and the logging format
model.fit(train_dataset,
epochs=5,
batch_size=64,
verbose=1)
# -
# ### 6.2 Single machine, multiple GPUs
#
# With the high-level API, single-machine multi-GPU training is very simple: the training code is identical to the single-GPU version. Just launch the single-GPU program with `paddle.distributed.launch`.
#
# ```bash
# $ python -m paddle.distributed.launch train.py
# ```
#
# where train.py contains the single-GPU code.
# ### 6.3 Custom loss
#
# Sometimes the loss a task requires is missing from the framework's built-in loss interfaces, or the built-in algorithm does not fit your needs, and you want to define your own. This section explains how to do that; start by looking at the following code:
#
# ```python
# class SelfDefineLoss(paddle.nn.Layer):
#     """
#     1. Inherit from paddle.nn.Layer
#     """
#     def __init__(self):
#         """
#         2. Define the constructor parameters according to your algorithm and usage needs
#         """
#         super(SelfDefineLoss, self).__init__()
#
#     def forward(self, input, label):
#         """
#         3. Implement forward, which is called with two arguments: input and label
#            - input: the model's forward output for a single sample or a batch
#            - label: the corresponding label data for that sample or batch
#
#            The return value is a Tensor: the loss summed or averaged according to your custom logic
#         """
#         # custom computation using the relevant Paddle APIs
#         # output = xxxxx
#         # return output
# ```
#
# Now that you have seen how to write such custom code, here is a real example: a custom loss written for an image-segmentation example, mainly to use a custom softmax axis.
#
# ```python
# class SoftmaxWithCrossEntropy(paddle.nn.Layer):
# def __init__(self):
# super(SoftmaxWithCrossEntropy, self).__init__()
#
# def forward(self, input, label):
# loss = F.softmax_with_cross_entropy(input,
# label,
# return_softmax=False,
# axis=1)
# return paddle.mean(loss)
# ```
# ### 6.4 Custom metric
#
# As with losses, when you want a personalized implementation you can define a custom evaluation metric through the framework, as follows:
#
# ```python
# class SelfDefineMetric(paddle.metric.Metric):
#     """
#     1. Inherit from paddle.metric.Metric
#     """
#     def __init__(self):
#         """
#         2. Implement the constructor with your own parameters
#         """
#         super(SelfDefineMetric, self).__init__()
#
#     def name(self):
#         """
#         3. Implement name, returning the name of the metric
#         """
#         return 'name of the custom metric'
#
#     def compute(self, ...):
#         """
#         4. Optional. compute is mainly used to speed up `update`: Paddle Tensor APIs called here are compiled into the network and computed together with the low-level C++ ops.
#         """
#
#         return the data you want; it will be passed to update as its arguments.
#
#     def update(self, ...):
#         """
#         5. Implement update to compute the metric for a single training batch.
#            - If `compute` is not implemented, the flattened model output and label data are passed to `update`.
#            - If `compute` is implemented, its return value is passed to `update`.
#         """
#         return acc value
#
#     def accumulate(self):
#         """
#         6. Implement accumulate, returning the metric value accumulated over past training batches.
#            Data is accumulated on every `update` call; `accumulate` computes over all accumulated data and returns the result,
#            which is shown in the training log of `fit`.
#         """
#         # compute from the member variables accumulated in update, then return
#         return accumulated acc value
#
#     def reset(self):
#         """
#         7. Implement reset, which resets the metric at the end of each epoch so the next epoch can compute afresh.
#         """
#         # do reset action
# ```
#
# Here is a concrete example from the framework itself: an evaluation metric interface the framework already provides, implemented with the class inheritance and member functions described above.
#
# ```python
# from paddle.metric import Metric
#
#
# class Precision(Metric):
# """
# Precision (also called positive predictive value) is the fraction of
# relevant instances among the retrieved instances. Refer to
# https://en.wikipedia.org/wiki/Evaluation_of_binary_classifiers
#
# Noted that this class manages the precision score only for binary
# classification task.
#
# ......
#
# """
#
# def __init__(self, name='precision', *args, **kwargs):
# super(Precision, self).__init__(*args, **kwargs)
# self.tp = 0 # true positive
# self.fp = 0 # false positive
# self._name = name
#
# def update(self, preds, labels):
# """
# Update the states based on the current mini-batch prediction results.
#
# Args:
# preds (numpy.ndarray): The prediction result, usually the output
# of two-class sigmoid function. It should be a vector (column
# vector or row vector) with data type: 'float64' or 'float32'.
# labels (numpy.ndarray): The ground truth (labels),
# the shape should keep the same as preds.
# The data type is 'int32' or 'int64'.
# """
# if isinstance(preds, paddle.Tensor):
# preds = preds.numpy()
# elif not _is_numpy_(preds):
# raise ValueError("The 'preds' must be a numpy ndarray or Tensor.")
#
# if isinstance(labels, paddle.Tensor):
# labels = labels.numpy()
# elif not _is_numpy_(labels):
# raise ValueError("The 'labels' must be a numpy ndarray or Tensor.")
#
# sample_num = labels.shape[0]
# preds = np.floor(preds + 0.5).astype("int32")
#
# for i in range(sample_num):
# pred = preds[i]
# label = labels[i]
# if pred == 1:
# if pred == label:
# self.tp += 1
# else:
# self.fp += 1
#
# def reset(self):
# """
# Resets all of the metric state.
# """
# self.tp = 0
# self.fp = 0
#
# def accumulate(self):
# """
# Calculate the final precision.
#
# Returns:
# A scaler float: results of the calculated precision.
# """
# ap = self.tp + self.fp
# return float(self.tp) / ap if ap != 0 else .0
#
# def name(self):
# """
# Returns metric name
# """
# return self._name
# ```
# ### 6.5 Custom callback
#
# The callback parameter of `fit` accepts a Callback instance that is invoked before and after every training epoch and every training batch. Through callbacks you can collect data and parameters during training, or implement custom behavior.
#
# ```python
# class SelfDefineCallback(paddle.callbacks.Callback):
#     """
#     1. Inherit from paddle.callbacks.Callback
#     2. Implement the member methods you need from the list below:
#        def on_train_begin(self, logs=None)             before training starts; called in `Model.fit`
#        def on_train_end(self, logs=None)               after training ends; called in `Model.fit`
#        def on_eval_begin(self, logs=None)              before evaluation starts; called in `Model.evaluate`
#        def on_eval_end(self, logs=None)                after evaluation ends; called in `Model.evaluate`
#        def on_test_begin(self, logs=None)              before prediction starts; called in `Model.predict`
#        def on_test_end(self, logs=None)                after prediction ends; called in `Model.predict`
#        def on_epoch_begin(self, epoch, logs=None)      before each training epoch; called in `Model.fit`
#        def on_epoch_end(self, epoch, logs=None)        after each training epoch; called in `Model.fit`
#        def on_train_batch_begin(self, step, logs=None) before each training batch; called in `Model.fit` and `Model.train_batch`
#        def on_train_batch_end(self, step, logs=None)   after each training batch; called in `Model.fit` and `Model.train_batch`
#        def on_eval_batch_begin(self, step, logs=None)  before each evaluation batch; called in `Model.evaluate` and `Model.eval_batch`
#        def on_eval_batch_end(self, step, logs=None)    after each evaluation batch; called in `Model.evaluate` and `Model.eval_batch`
#        def on_test_batch_begin(self, step, logs=None)  before each prediction batch; called in `Model.predict` and `Model.test_batch`
#        def on_test_batch_end(self, step, logs=None)    after each prediction batch; called in `Model.predict` and `Model.test_batch`
#     """
#     def __init__(self):
#         super(SelfDefineCallback, self).__init__()
#
#     # implement the member methods you need
# ```
#
# Here is a real example from the framework: the built-in ModelCheckpoint callback, which makes it easy to automatically save the model after each training epoch during fit.
#
# ```python
# class ModelCheckpoint(Callback):
# def __init__(self, save_freq=1, save_dir=None):
# self.save_freq = save_freq
# self.save_dir = save_dir
#
# def on_epoch_begin(self, epoch=None, logs=None):
# self.epoch = epoch
#
# def _is_save(self):
# return self.model and self.save_dir and ParallelEnv().local_rank == 0
#
# def on_epoch_end(self, epoch, logs=None):
# if self._is_save() and self.epoch % self.save_freq == 0:
# path = '{}/{}'.format(self.save_dir, epoch)
# print('save checkpoint at {}'.format(os.path.abspath(path)))
# self.model.save(path)
#
# def on_train_end(self, logs=None):
# if self._is_save():
# path = '{}/final'.format(self.save_dir)
# print('save checkpoint at {}'.format(os.path.abspath(path)))
# self.model.save(path)
#
# ```
# ## 7. Model evaluation
#
# A trained model can be evaluated with the `evaluate` interface: define the evaluation dataset in advance, then simply call `evaluate`; when it finishes, the evaluation results are computed and returned according to the loss and metrics defined in prepare.
#
# The return value is a dict:
# * loss only: `{'loss': xxx}`
# * loss plus one metric: `{'loss': xxx, 'metric name': xxx}`
# * loss plus multiple metrics: `{'loss': xxx, 'metric name': xxx, 'metric name': xxx}`
result = model.evaluate(val_dataset, verbose=1)
# ## 8. Model prediction
#
# The high-level API provides the `predict` interface to conveniently validate a trained model: feed it the data to predict on, and it returns the predictions computed by the model.
#
# The return value is a list whose length matches the number of model outputs:
# * single-output model: [(numpy_ndarray_1, numpy_ndarray_2, ..., numpy_ndarray_n)]
# * multi-output model: [(numpy_ndarray_1, numpy_ndarray_2, ..., numpy_ndarray_n), (numpy_ndarray_1, numpy_ndarray_2, ..., numpy_ndarray_n), ...]
#
# Each numpy_ndarray_n is the prediction obtained by running the corresponding input through the model; their number matches the size of the prediction dataset.
pred_result = model.predict(val_dataset)
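# To make the return format above concrete, here is a small pure-Python sketch (illustrative only; the function name and data are made up, not part of the tutorial) showing how the nested `predict` output of a single-output classifier could be flattened into class labels with an argmax:

```python
def labels_from_predict(pred_result):
    """Flatten [(batch_1, ..., batch_n)] of per-sample scores into argmax labels."""
    labels = []
    for batch in pred_result[0]:          # single-output model: one tuple of batches
        for logits in batch:              # one row of scores per sample
            labels.append(max(range(len(logits)), key=lambda i: logits[i]))
    return labels

# made-up scores for 3 samples split across 2 batches
fake = [([[0.1, 0.9], [0.8, 0.2]], [[0.3, 0.7]])]
print(labels_from_predict(fake))  # [1, 0, 1]
```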
# ### 8.1 Multi-GPU prediction
#
# Sometimes there is too much data to predict on a single GPU within the required time; for that case the `predict` interface also supports running in multi-GPU mode.
#
# It is just as simple to use: no code changes are needed, only launch the prediction script with launch.
#
# ```bash
# $ python3 -m paddle.distributed.launch infer.py
# ```
#
# where infer.py contains the model.predict code.
# ## 9. Model deployment
#
# ### 9.1 Saving the model
#
# Once training and validation meet expectations, use the `save` interface to save the model, either for later fine-tuning (training=True) or for inference deployment (training=False).
#
# Note that to save the parameter and model files of an inference model when training in dynamic-graph mode, the forward member function must be decorated with @paddle.jit.to_static, as in the following example:
#
# ```python
# class Mnist(paddle.nn.Layer):
# def __init__(self):
# super(Mnist, self).__init__()
#
# self.flatten = paddle.nn.Flatten()
# self.linear_1 = paddle.nn.Linear(784, 512)
# self.linear_2 = paddle.nn.Linear(512, 10)
# self.relu = paddle.nn.ReLU()
# self.dropout = paddle.nn.Dropout(0.2)
#
# @paddle.jit.to_static
# def forward(self, inputs):
# y = self.flatten(inputs)
# y = self.linear_1(y)
# y = self.relu(y)
# y = self.dropout(y)
# y = self.linear_2(y)
#
# return y
# ```
model.save('~/model/mnist')
# ### 9.2 Inference deployment
# With a model saved for inference, you can deploy it as a prediction service with an inference framework; see [Inference deployment](https://www.paddlepaddle.org.cn/documentation/docs/zh/guides/05_inference_deployment/index_cn.html), which covers server-side deployment, mobile deployment, and model compression.
|
docs/practices/high_level_api/high_level_api.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.7 ('build')
# language: python
# name: python3
# ---
# # Evaluation of Seq2Seq Models
# +
import transformers
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, LogitsProcessorList, MinLengthLogitsProcessor, TopKLogitsWarper, TemperatureLogitsWarper, BeamSearchScorer
import torch
import datasets
import pickle
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
# -
# ### Setup & Helper Functions
# +
tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
def tokenize_function(examples):  # avoid shadowing the builtin `set`
    inputs = tokenizer(examples["code"], max_length=512, padding="max_length", truncation=True, return_tensors="pt")
    with tokenizer.as_target_tokenizer():
        labels = tokenizer(examples["docstring"], max_length=512, padding="max_length", truncation=True, return_tensors="pt")
    inputs["labels"] = labels["input_ids"]
    return inputs
# -
bleu = datasets.load_metric('sacrebleu')
rouge = datasets.load_metric('rouge')
meteor = datasets.load_metric('meteor')
import numpy as np
class testWrapper():
def __init__(self, model):
self.model = model.cuda()
self.beam_scorer = BeamSearchScorer(
batch_size=4,
max_length=self.model.config.max_length,
num_beams=4,
device=self.model.device,
)
self.logits_processor = LogitsProcessorList(
[MinLengthLogitsProcessor(5, eos_token_id=self.model.config.eos_token_id)]
)
self.logits_warper = LogitsProcessorList(
[
TopKLogitsWarper(50),
TemperatureLogitsWarper(0.7),
]
)
input_ids = torch.ones((4, 1), device=self.model.device, dtype=torch.long)
self.input_ids = input_ids * self.model.config.decoder_start_token_id
def generate_string(self, batch):
inputs = tokenizer(batch["code"], padding="max_length", truncation=True, max_length=512, return_tensors="pt")
input_ids = inputs.input_ids.cuda()
attention_mask = inputs.attention_mask.cuda()
outputs = self.model.generate(input_ids, attention_mask=attention_mask)
output_str = tokenizer.batch_decode(outputs, skip_special_tokens=True)
batch["pred_string"] = output_str
return batch
def generate_per_string(self, batch):
inputs = tokenizer(batch["code"], padding="max_length", truncation=True, max_length=512, return_tensors="pt")
input_ids = inputs.input_ids.cuda()
attention_mask = inputs.attention_mask.cuda()
outputs = self.model.generate(input_ids, attention_mask=attention_mask)
output_str = tokenizer.batch_decode(outputs, skip_special_tokens=True)
batch["pred_string"] = output_str
predictions = output_str
references = [batch["docstring"]]
rouge_output = rouge.compute(predictions=predictions, references=references, rouge_types=["rouge2"])["rouge2"].mid
bleu_output = bleu.compute(predictions=predictions, references=[[ref] for ref in references])
meteor_output = meteor.compute(predictions=predictions, references=references)
batch["rouge2_precision"] = round(rouge_output.precision, 4)
batch["rouge2_recall"] = round(rouge_output.recall, 4)
batch["rouge2_fmeasure"] = round(rouge_output.fmeasure, 4)
batch["bleu_score"] = bleu_output["score"]
batch["meteor_score"] = meteor_output["meteor"]
return batch
def test_gen(self, batch):
encoder_input_ids = tokenizer(batch['code'], padding="max_length", truncation=True, max_length=512, return_tensors="pt").input_ids
model_kwargs = {
"encoder_outputs": self.model.get_encoder()(
encoder_input_ids.repeat_interleave(4, dim=0), return_dict=True
)
}
outputs = self.model.beam_sample(
self.input_ids, self.beam_scorer, logits_processor=self.logits_processor, logits_warper=self.logits_warper, **model_kwargs
)
batch['pred_string'] = tokenizer.batch_decode(outputs, skip_special_tokens=True)
return batch
def eval_compute(results):
predictions=results["pred_string"]
references=results["docstring"]
rouge_output = rouge.compute(predictions=predictions, references=references, rouge_types=["rouge2"])["rouge2"].mid
bleu_output = bleu.compute(predictions=predictions, references=[[ref] for ref in references])
meteor_output = meteor.compute(predictions=predictions, references=references)
return {
"rouge2_precision": round(rouge_output.precision, 4),
"rouge2_recall": round(rouge_output.recall, 4),
"rouge2_fmeasure": round(rouge_output.fmeasure, 4),
"bleu_score" : bleu_output["score"],
"meteor_score" : meteor_output["meteor"]
}
def modelSetup(path):
model = AutoModelForSeq2SeqLM.from_pretrained(path)
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.eos_token_id = tokenizer.sep_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.vocab_size = model.config.encoder.vocab_size
model.config.num_beams = 4
return model
def ttest(delta, N):
    """Paired t-test over a vector of per-sample score differences."""
    from scipy import stats
    deg_free = N - 1
    d_sq = delta ** 2
    t = (np.sum(delta) / N) / np.sqrt((np.sum(d_sq) - ((np.sum(delta) ** 2) / N)) / (deg_free * N))
    p = 2 * stats.t.sf(np.abs(t), deg_free)  # two-tailed p-value
    return t, p
# # Test Generation
# ## No Augmentation
test_set = datasets.load_dataset('json', data_files="D:\\PROJECT\\data\\CodeSearchNet\\py_clean\\test.jsonl")["train"]
frame_test = test_set.to_pandas()
frame_test
frame_test.loc[6100]
# +
import pickle
with open("D:\\PROJECT\\out\\original\\small\\results.pkl", 'rb') as f:
frame = pickle.load(f)
# -
frame
small_path = r"D:\PROJECT\out\original\small\model_out"
medium_path = r"D:\PROJECT\out\original\medium\model_out"
small_model = modelSetup(small_path)
medium_model = modelSetup(medium_path)
small_tester = testWrapper(small_model)
medium_tester = testWrapper(medium_model)
small_res = test_set.map(small_tester.generate_string, batched=True, batch_size=8)
small_res
medium_res = test_set.map(medium_tester.generate_string, batched=True, batch_size=8)
medium_per_scores = eval_compute(medium_res)
medium_per_scores
small_per_scores = eval_compute(small_res)
small_per_scores
del small_res  # free memory once the scores are computed
medium_per_scores = eval_compute(medium_res)
with open("D:\\PROJECT\\out\\original\\medium\\per_scores.pkl", 'wb') as f:
pickle.dump(medium_per_scores, f)
with open("D:\\PROJECT\\out\\original\\medium\\per_scores.pkl", 'rb') as f:
print(pickle.load(f))
with open("D:\\PROJECT\\out\\original\\small\\per_scores.pkl", 'rb') as f:
print(pickle.load(f))
medium_per_res = test_set.map(medium_tester.generate_per_string, batched=False)
small_per_res = test_set.map(small_tester.generate_per_string, batched=False)
with open("D:\\PROJECT\\out\\original\\small\\per_res.pkl", 'wb') as f:
pickle.dump(small_per_res, f)
with open("D:\\PROJECT\\out\\original\\medium\\per_res.pkl", 'wb') as f:
pickle.dump(medium_per_res, f)
# +
small_per_scores = eval_compute(small_res)
medium_per_scores = eval_compute(medium_res)
# +
#medium_scores
# +
#small_scores
# +
input_str = "list = ['x', 'y', 'z']"
inputs = tokenizer([input_str], padding="max_length", truncation=True, max_length=512, return_tensors="pt")
input_ids = inputs.input_ids.cuda()
attention_mask = inputs.attention_mask.cuda()
outputs_orig = medium_model.generate(input_ids, attention_mask=attention_mask)
outputs = medium_aug_model.cuda().generate(input_ids, attention_mask=attention_mask)
output_str = tokenizer.batch_decode(outputs_orig, skip_special_tokens=True)
output_aug_str = tokenizer.batch_decode(outputs, skip_special_tokens=True)
print(output_str, output_aug_str)
# -
# ## Augmentation
with open("D:\\PROJECT\\out\\aug\\medium\\per_sentence_res.pkl", 'rb') as f:
frame = pickle.load(f).to_pandas()
print(frame[frame.bleu_score < 1.0].iloc[1790].pred_string)
small_aug_path = "D:\\PROJECT\\out\\aug\\small\\model_out"
medium_aug_path = "D:\\PROJECT\\out\\aug\\medium\\model_out"
small_aug_model = modelSetup(small_aug_path)
medium_aug_model = modelSetup(medium_aug_path)
small_aug_tester = testWrapper(small_aug_model)
medium_aug_tester = testWrapper(medium_aug_model)
medium_aug_res = test_set.map(medium_aug_tester.generate_string, batched=True, batch_size=8)
small_aug_res = test_set.map(small_aug_tester.generate_string, batched=True, batch_size=8)
medium_aug_scores = eval_compute(medium_aug_res)
small_aug_scores = eval_compute(small_aug_res)
medium_aug_scores
small_aug_scores
del medium_aug_scores
del small_aug_scores
medium_aug_per_res = test_set.map(medium_aug_tester.generate_per_string, batched=False)
with open("D:\\PROJECT\\out\\aug\\medium\\per_sentence_res.pkl", 'wb') as f:
pickle.dump(medium_aug_per_res, f)
small_aug_per_res = test_set.map(small_aug_tester.generate_per_string, batched=False)
with open("D:\\PROJECT\\out\\aug\\small\\per_sentence_res.pkl", 'wb') as f:
pickle.dump(small_aug_per_res, f)
docstrings = test_set.to_pandas()['docstring']
aug_doc = medium_aug_per_res.to_pandas()['docstring']
docstrings
avg_orig = np.mean(docstrings.apply(lambda x : len(x)))
avg_aug = np.mean(aug_doc.apply(lambda x : len(x)))
avg_aug
|
notebooks/evaluation.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
FN = 'train'
# You should use a GPU, but if it is busy you can always fall back to your CPU
import os
# os.environ['THEANO_FLAGS'] = 'device=cpu,floatX=float32'
import keras
keras.__version__
# Use the token indexing from [vocabulary-embedding](./vocabulary-embedding.ipynb). This does not clip the word indexes to `vocab_size`.
#
# Replace out-of-vocabulary words with several `oov` placeholders (`oov0`, `oov1`, ...) that appear in the same description and headline. This allows the headline generator to replace an oov placeholder with the same word from the description.
FN0 = 'vocabulary-embedding'
# Implement the "simple" model from http://arxiv.org/pdf/1512.01712v1.pdf
# You can start training from a pre-existing model. This allows you to run this notebook many times, each time using different parameters and passing the end result of one run as the input of the next.
#
# I started with `maxlend=0` (see below), in which the description is ignored. I then started with a high `LR` and manually lowered it. I also started with `nflips=0`, in which the original headline is used as-is, and slowly moved to `12`, in which half the input headline is flipped with the predictions made by the model (the paper used a fixed 10%).
FN1 = 'train'
# Input data (`X`) is made of `maxlend` description words followed by `eos`,
# followed by headline words and another `eos`.
# If the description is shorter than `maxlend` it is left-padded with `empty`;
# if the entire input is longer than `maxlen` it is clipped, and if it is shorter it is right-padded with `empty`.
#
# Labels (`Y`) are the headline words followed by `eos`, clipped or padded to `maxlenh`.
#
# In other words, the input is made of a `maxlend` half, in which the description is padded from the left,
# and a `maxlenh` half, in which `eos` is followed by a headline followed by another `eos` if there is enough space.
#
# The labels match only the second half, and
# the first label matches the `eos` at the start of the second half (following the description in the first half).
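# The input layout above can be sketched in pure Python (a toy illustration with made-up values, not the notebook's actual preprocessing code):

```python
empty, eos = 0, 1  # same special tokens as defined below

def build_input(desc, head, maxlend=4, maxlenh=4):
    """Left-pad/clip the description to maxlend, add eos, append the headline
    plus a trailing eos, then clip/right-pad the whole sequence to maxlen."""
    maxlen = maxlend + maxlenh
    left = [empty] * max(0, maxlend - len(desc)) + desc[-maxlend:]
    x = (left + [eos] + head + [eos])[:maxlen]
    return x + [empty] * (maxlen - len(x))

x = build_input([3, 3], [5, 6])
print(x)            # [0, 0, 3, 3, 1, 5, 6, 1]
print(x[4] == eos)  # the eos always sits at position maxlend
```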
maxlend=25 # 0 - if we don't want to use the description at all
maxlenh=25
maxlen = maxlend + maxlenh
rnn_size = 512 # must be same as 160330-word-gen
rnn_layers = 3 # match FN1
batch_norm=False
# the output of the first `activation_rnn_size` nodes from the top LSTM layer will be used for activation, and the rest will be used to select the predicted word
activation_rnn_size = 40 if maxlend else 0
# training parameters
seed=42
p_W, p_U, p_dense, p_emb, weight_decay = 0, 0, 0, 0, 0
optimizer = 'adam'
LR = 1e-4
batch_size=64
nflips=10
nb_train_samples = 30000
nb_val_samples = 3000
# # read word embedding
# +
import cPickle as pickle
with open('data/%s.pkl'%FN0, 'rb') as fp:
embedding, idx2word, word2idx, glove_idx2idx = pickle.load(fp)
vocab_size, embedding_size = embedding.shape
# -
with open('data/%s.data.pkl'%FN0, 'rb') as fp:
X, Y = pickle.load(fp)
nb_unknown_words = 10
print 'number of examples',len(X),len(Y)
print 'dimension of embedding space for words',embedding_size
print 'vocabulary size', vocab_size, 'the last %d words can be used as place holders for unknown/oov words'%nb_unknown_words
print 'total number of different words',len(idx2word), len(word2idx)
print 'number of words outside vocabulary which we can substitute using glove similarity', len(glove_idx2idx)
print 'number of words that will be regarded as unknown (unk)/out-of-vocabulary (oov)',len(idx2word)-vocab_size-len(glove_idx2idx)
for i in range(nb_unknown_words):
idx2word[vocab_size-1-i] = '<%d>'%i
# when printing mark words outside vocabulary with `^` at their end
oov0 = vocab_size-nb_unknown_words
for i in range(oov0, len(idx2word)):
idx2word[i] = idx2word[i]+'^'
from sklearn.cross_validation import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=nb_val_samples, random_state=seed)
len(X_train), len(Y_train), len(X_test), len(Y_test)
del X
del Y
empty = 0
eos = 1
idx2word[empty] = '_'
idx2word[eos] = '~'
import numpy as np
from keras.preprocessing import sequence
from keras.utils import np_utils
import random, sys
def prt(label, x):
print label+':',
for w in x:
print idx2word[w],
print
i = 334
prt('H',Y_train[i])
prt('D',X_train[i])
i = 334
prt('H',Y_test[i])
prt('D',X_test[i])
# # Model
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Dropout, RepeatVector, Merge
from keras.layers.wrappers import TimeDistributed
from keras.layers.recurrent import LSTM
from keras.layers.embeddings import Embedding
from keras.regularizers import l2
# seed weight initialization
random.seed(seed)
np.random.seed(seed)
regularizer = l2(weight_decay) if weight_decay else None
# start with a standard stacked LSTM
model = Sequential()
model.add(Embedding(vocab_size, embedding_size,
input_length=maxlen,
W_regularizer=regularizer, dropout=p_emb, weights=[embedding], mask_zero=True,
name='embedding_1'))
for i in range(rnn_layers):
lstm = LSTM(rnn_size, return_sequences=True, # batch_norm=batch_norm,
W_regularizer=regularizer, U_regularizer=regularizer,
b_regularizer=regularizer, dropout_W=p_W, dropout_U=p_U,
name='lstm_%d'%(i+1)
)
model.add(lstm)
model.add(Dropout(p_dense,name='dropout_%d'%(i+1)))
# A special layer that reduces the input just to its headline part (the second half).
# For each word in this part it concatenates the output of the previous layer (RNN)
# with a weighted average of the outputs over the description part.
# Only the last `rnn_size - activation_rnn_size` units of each output are used here.
# The first `activation_rnn_size` units are used to compute the weights for the averaging.
# +
from keras.layers.core import Lambda
import keras.backend as K
def simple_context(X, mask, n=activation_rnn_size, maxlend=maxlend, maxlenh=maxlenh):
desc, head = X[:,:maxlend,:], X[:,maxlend:,:]
head_activations, head_words = head[:,:,:n], head[:,:,n:]
desc_activations, desc_words = desc[:,:,:n], desc[:,:,n:]
# RTFM http://deeplearning.net/software/theano/library/tensor/basic.html#theano.tensor.batched_tensordot
# activation for every head word and every desc word
activation_energies = K.batch_dot(head_activations, desc_activations, axes=(2,2))
# make sure we dont use description words that are masked out
activation_energies = activation_energies + -1e20*K.expand_dims(1.-K.cast(mask[:, :maxlend],'float32'),1)
# for every head word compute weights for every desc word
activation_energies = K.reshape(activation_energies,(-1,maxlend))
activation_weights = K.softmax(activation_energies)
activation_weights = K.reshape(activation_weights,(-1,maxlenh,maxlend))
# for every head word compute weighted average of desc words
desc_avg_word = K.batch_dot(activation_weights, desc_words, axes=(2,1))
return K.concatenate((desc_avg_word, head_words))
class SimpleContext(Lambda):
def __init__(self,**kwargs):
super(SimpleContext, self).__init__(simple_context,**kwargs)
self.supports_masking = True
def compute_mask(self, input, input_mask=None):
return input_mask[:, maxlend:]
def get_output_shape_for(self, input_shape):
nb_samples = input_shape[0]
n = 2*(rnn_size - activation_rnn_size)
return (nb_samples, maxlenh, n)
# -
if activation_rnn_size:
model.add(SimpleContext(name='simplecontext_1'))
model.add(TimeDistributed(Dense(vocab_size,
W_regularizer=regularizer, b_regularizer=regularizer,
name = 'timedistributed_1')))
model.add(Activation('softmax', name='activation_1'))
from keras.optimizers import Adam, RMSprop # usually I prefer Adam but article used rmsprop
# opt = Adam(lr=LR) # keep calm and reduce learning rate
model.compile(loss='categorical_crossentropy', optimizer=optimizer)
# + language="javascript"
# // new Audio("http://www.soundjay.com/button/beep-09.wav").play ()
# -
K.set_value(model.optimizer.lr,np.float32(LR))
# +
def str_shape(x):
return 'x'.join(map(str,x.shape))
def inspect_model(model):
for i,l in enumerate(model.layers):
print i, 'cls=%s name=%s'%(type(l).__name__, l.name)
weights = l.get_weights()
for weight in weights:
print str_shape(weight),
print
# -
inspect_model(model)
# # Load
if FN1 and os.path.exists('data/%s.hdf5'%FN1):
model.load_weights('data/%s.hdf5'%FN1)
# # Test
def lpadd(x, maxlend=maxlend, eos=eos):
"""left (pre) pad a description to maxlend and then add eos.
The eos is the input to predicting the first word in the headline
"""
assert maxlend >= 0
if maxlend == 0:
return [eos]
n = len(x)
if n > maxlend:
x = x[-maxlend:]
n = maxlend
return [empty]*(maxlend-n) + x + [eos]
samples = [lpadd([3]*26)]
# pad from right (post) so the first maxlend will be description followed by headline
data = sequence.pad_sequences(samples, maxlen=maxlen, value=empty, padding='post', truncating='post')
np.all(data[:,maxlend] == eos)
data.shape,map(len, samples)
probs = model.predict(data, verbose=0, batch_size=1)
probs.shape
# # Sample generation
# This section is only used to generate examples. You can skip it if you just want to understand how the training works.
# variation to https://github.com/ryankiros/skip-thoughts/blob/master/decoding/search.py
def beamsearch(predict, start=[empty]*maxlend + [eos],
k=1, maxsample=maxlen, use_unk=True, empty=empty, eos=eos, temperature=1.0):
"""return k samples (beams) and their NLL scores, each sample is a sequence of labels,
all samples starts with an `empty` label and end with `eos` or truncated to length of `maxsample`.
You need to supply `predict` which returns the label probability of each sample.
`use_unk` allow usage of `oov` (out-of-vocabulary) label in samples
"""
def sample(energy, n, temperature=temperature):
"""sample at most n elements according to their energy"""
n = min(n,len(energy))
prb = np.exp(-np.array(energy) / temperature )
res = []
for i in xrange(n):
z = np.sum(prb)
r = np.argmax(np.random.multinomial(1, prb/z, 1))
res.append(r)
prb[r] = 0. # make sure we select each element only once
return res
dead_k = 0 # samples that reached eos
dead_samples = []
dead_scores = []
    live_k = 1 # samples that have not yet reached eos
live_samples = [list(start)]
live_scores = [0]
while live_k:
# for every possible live sample calc prob for every possible label
probs = predict(live_samples, empty=empty)
# total score for every sample is sum of -log of word prb
cand_scores = np.array(live_scores)[:,None] - np.log(probs)
cand_scores[:,empty] = 1e20
if not use_unk:
for i in range(nb_unknown_words):
cand_scores[:,vocab_size - 1 - i] = 1e20
live_scores = list(cand_scores.flatten())
# find the best (lowest) scores we have from all possible dead samples and
# all live samples and all possible new words added
scores = dead_scores + live_scores
ranks = sample(scores, k)
n = len(dead_scores)
ranks_dead = [r for r in ranks if r < n]
ranks_live = [r - n for r in ranks if r >= n]
dead_scores = [dead_scores[r] for r in ranks_dead]
dead_samples = [dead_samples[r] for r in ranks_dead]
live_scores = [live_scores[r] for r in ranks_live]
# append the new words to their appropriate live sample
voc_size = probs.shape[1]
live_samples = [live_samples[r//voc_size]+[r%voc_size] for r in ranks_live]
# live samples that should be dead are...
# even if len(live_samples) == maxsample we dont want it dead because we want one
# last prediction out of it to reach a headline of maxlenh
zombie = [s[-1] == eos or len(s) > maxsample for s in live_samples]
# add zombies to the dead
dead_samples += [s for s,z in zip(live_samples,zombie) if z]
dead_scores += [s for s,z in zip(live_scores,zombie) if z]
dead_k = len(dead_samples)
# remove zombies from the living
live_samples = [s for s,z in zip(live_samples,zombie) if not z]
live_scores = [s for s,z in zip(live_scores,zombie) if not z]
live_k = len(live_samples)
return dead_samples + live_samples, dead_scores + live_scores
# +
# # !pip install python-Levenshtein
# -
def keras_rnn_predict(samples, empty=empty, model=model, maxlen=maxlen):
"""for every sample, calculate probability for every possible label
you need to supply your RNN model and maxlen - the length of sequences it can handle
"""
sample_lengths = map(len, samples)
assert all(l > maxlend for l in sample_lengths)
assert all(l[maxlend] == eos for l in samples)
# pad from right (post) so the first maxlend will be description followed by headline
data = sequence.pad_sequences(samples, maxlen=maxlen, value=empty, padding='post', truncating='post')
probs = model.predict(data, verbose=0, batch_size=batch_size)
return np.array([prob[sample_length-maxlend-1] for prob, sample_length in zip(probs, sample_lengths)])
def vocab_fold(xs):
    """convert a list of word indexes that may contain words outside vocab_size to words inside.
    If a word is outside, first try glove_idx2idx to find a similar word inside.
    If none exists, replace all occurrences of the same unknown word with <0>, <1>, ...
    """
xs = [x if x < oov0 else glove_idx2idx.get(x,x) for x in xs]
# the more popular word is <0> and so on
outside = sorted([x for x in xs if x >= oov0])
# if there are more than nb_unknown_words oov words then put them all in nb_unknown_words-1
outside = dict((x,vocab_size-1-min(i, nb_unknown_words-1)) for i, x in enumerate(outside))
xs = [outside.get(x,x) for x in xs]
return xs
def vocab_unfold(desc,xs):
# assume desc is the unfolded version of the start of xs
unfold = {}
for i, unfold_idx in enumerate(desc):
fold_idx = xs[i]
if fold_idx >= oov0:
unfold[fold_idx] = unfold_idx
return [unfold.get(x,x) for x in xs]
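# The fold/unfold round trip can be illustrated with a toy vocabulary. This sketch uses made-up values (`TOY_VOCAB_SIZE`, `TOY_OOV0`, `TOY_NB_UNKNOWN`) and omits the `glove_idx2idx` similar-word lookup; it only shows how an out-of-vocabulary index is folded onto a placeholder slot and then recovered using the description as a key.

```python
TOY_VOCAB_SIZE = 10
TOY_OOV0 = 8        # first out-of-vocabulary index
TOY_NB_UNKNOWN = 2  # number of <0>, <1>, ... placeholder slots

def toy_fold(xs):
    # map each OOV index onto one of the last TOY_NB_UNKNOWN vocabulary slots
    outside = sorted(x for x in xs if x >= TOY_OOV0)
    outside = {x: TOY_VOCAB_SIZE - 1 - min(i, TOY_NB_UNKNOWN - 1)
               for i, x in enumerate(outside)}
    return [outside.get(x, x) for x in xs]

def toy_unfold(desc, xs):
    # desc is the unfolded version of the start of xs
    unfold = {xs[i]: w for i, w in enumerate(desc) if xs[i] >= TOY_OOV0}
    return [unfold.get(x, x) for x in xs]

desc = [3, 42, 5]                # 42 is outside the toy vocabulary
folded = toy_fold(desc)
print(folded)                    # [3, 9, 5] -- 42 folded onto slot 9
print(toy_unfold(desc, folded))  # [3, 42, 5] -- the original is recovered
```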
# +
import sys
import Levenshtein
def gensamples(skips=2, k=10, batch_size=batch_size, short=True, temperature=1., use_unk=True):
i = random.randint(0,len(X_test)-1)
print 'HEAD:',' '.join(idx2word[w] for w in Y_test[i][:maxlenh])
print 'DESC:',' '.join(idx2word[w] for w in X_test[i][:maxlend])
sys.stdout.flush()
print 'HEADS:'
x = X_test[i]
samples = []
if maxlend == 0:
skips = [0]
else:
skips = range(min(maxlend,len(x)), max(maxlend,len(x)), abs(maxlend - len(x)) // skips + 1)
for s in skips:
start = lpadd(x[:s])
fold_start = vocab_fold(start)
sample, score = beamsearch(predict=keras_rnn_predict, start=fold_start, k=k, temperature=temperature, use_unk=use_unk)
assert all(s[maxlend] == eos for s in sample)
samples += [(s,start,scr) for s,scr in zip(sample,score)]
samples.sort(key=lambda x: x[-1])
codes = []
for sample, start, score in samples:
code = ''
words = []
sample = vocab_unfold(start, sample)[len(start):]
for w in sample:
if w == eos:
break
words.append(idx2word[w])
code += chr(w//(256*256)) + chr((w//256)%256) + chr(w%256)
if short:
distance = min([100] + [-Levenshtein.jaro(code,c) for c in codes])
if distance > -0.6:
print score, ' '.join(words)
# print '%s (%.2f) %f'%(' '.join(words), score, distance)
else:
print score, ' '.join(words)
codes.append(code)
# -
gensamples(skips=2, batch_size=batch_size, k=10, temperature=1.)
# # Data generator
# The data generator produces batches of inputs and outputs/labels for training. Each input is made of two parts: the first `maxlend` words are the original description, followed by `eos`, followed by the headline we want to predict (except for its last word, which is always `eos`), and finally `empty` padding up to `maxlen` words.
#
# For each input, the output is the headline words (without the starting `eos` but with the ending `eos`) padded with `empty` words up to `maxlenh` words. The output is also expanded into a one-hot encoding of each word.
# To be more realistic, the second part of the input should be the result of generation rather than the original headline.
# As a compromise we flip just `nflips` words to come from the generator, and even that is done in a naive way that costs little time: using the full input (description + `eos` + headline), generate predictions for the outputs; then, for `nflips` random words of the output, replace the original word with the highest-probability word from the prediction.
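# The input/output layout described above can be sketched with toy values. `MAXLEND`, `MAXLENH`, `EOS`, and `EMPTY` here are made-up stand-ins for the notebook's `maxlend`, `maxlenh`, `eos`, and `empty`; the real code builds these vectors with `lpadd`, `pad_sequences`, and `conv_seq_labels`.

```python
EOS, EMPTY = 1, 0
MAXLEND, MAXLENH = 4, 3
MAXLEN = MAXLEND + MAXLENH

def toy_input(desc, head):
    # left-pad (or truncate) the description to MAXLEND, then EOS, then the
    # headline without its final slot; right-pad with EMPTY up to MAXLEN
    d = ([EMPTY] * max(MAXLEND - len(desc), 0) + desc)[-MAXLEND:]
    x = d + [EOS] + head[:MAXLENH - 1]
    return x + [EMPTY] * (MAXLEN - len(x))

def toy_label(head):
    # the label is the headline ending with EOS, padded to MAXLENH
    y = (head + [EOS])[:MAXLENH]
    return y + [EMPTY] * (MAXLENH - len(y))

print(toy_input([7, 8, 9], [5, 6]))  # [0, 7, 8, 9, 1, 5, 6]
print(toy_label([5, 6]))             # [5, 6, 1]
```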
def flip_headline(x, nflips=None, model=None, debug=False):
"""given a vectorized input (after `pad_sequences`) flip some of the words in the second half (headline)
with words predicted by the model
"""
if nflips is None or model is None or nflips <= 0:
return x
batch_size = len(x)
assert np.all(x[:,maxlend] == eos)
probs = model.predict(x, verbose=0, batch_size=batch_size)
x_out = x.copy()
for b in range(batch_size):
# pick locations we want to flip
# 0...maxlend-1 are descriptions and should be fixed
# maxlend is eos and should be fixed
flips = sorted(random.sample(xrange(maxlend+1,maxlen), nflips))
if debug and b < debug:
print b,
for input_idx in flips:
if x[b,input_idx] == empty or x[b,input_idx] == eos:
continue
# convert from input location to label location
# the output at maxlend (when input is eos) is feed as input at maxlend+1
label_idx = input_idx - (maxlend+1)
prob = probs[b, label_idx]
w = prob.argmax()
if w == empty: # replace accidental empty with oov
w = oov0
if debug and b < debug:
print '%s => %s'%(idx2word[x_out[b,input_idx]],idx2word[w]),
x_out[b,input_idx] = w
if debug and b < debug:
print
return x_out
def conv_seq_labels(xds, xhs, nflips=None, model=None, debug=False):
"""description and hedlines are converted to padded input vectors. headlines are one-hot to label"""
batch_size = len(xhs)
assert len(xds) == batch_size
x = [vocab_fold(lpadd(xd)+xh) for xd,xh in zip(xds,xhs)] # the input does not have 2nd eos
x = sequence.pad_sequences(x, maxlen=maxlen, value=empty, padding='post', truncating='post')
x = flip_headline(x, nflips=nflips, model=model, debug=debug)
y = np.zeros((batch_size, maxlenh, vocab_size))
for i, xh in enumerate(xhs):
xh = vocab_fold(xh) + [eos] + [empty]*maxlenh # output does have a eos at end
xh = xh[:maxlenh]
y[i,:,:] = np_utils.to_categorical(xh, vocab_size)
return x, y
def gen(Xd, Xh, batch_size=batch_size, nb_batches=None, nflips=None, model=None, debug=False, seed=seed):
"""yield batches. for training use nb_batches=None
for validation generate deterministic results repeating every nb_batches
while training it is good idea to flip once in a while the values of the headlines from the
value taken from Xh to value generated by the model.
"""
c = nb_batches if nb_batches else 0
while True:
xds = []
xhs = []
if nb_batches and c >= nb_batches:
    c = 0
new_seed = random.randint(0, sys.maxint)
random.seed(c + 123456789 + seed)
for b in range(batch_size):
t = random.randint(0,len(Xd)-1)
xd = Xd[t]
s = random.randint(min(maxlend,len(xd)), max(maxlend,len(xd)))
xds.append(xd[:s])
xh = Xh[t]
s = random.randint(min(maxlenh,len(xh)), max(maxlenh,len(xh)))
xhs.append(xh[:s])
# undo the seeding before we yield in order not to affect the caller
c += 1
random.seed(new_seed)
yield conv_seq_labels(xds, xhs, nflips=nflips, model=model, debug=debug)
r = next(gen(X_train, Y_train, batch_size=batch_size))
r[0].shape, r[1].shape, len(r)
def test_gen(gen, n=5):
Xtr,Ytr = next(gen)
for i in range(n):
assert Xtr[i,maxlend] == eos
x = Xtr[i,:maxlend]
y = Xtr[i,maxlend:]
yy = Ytr[i,:]
yy = np.where(yy)[1]
prt('L',yy)
prt('H',y)
if maxlend:
prt('D',x)
test_gen(gen(X_train, Y_train, batch_size=batch_size))
# test flipping
test_gen(gen(X_train, Y_train, nflips=6, model=model, debug=False, batch_size=batch_size))
valgen = gen(X_test, Y_test,nb_batches=3, batch_size=batch_size)
# check that valgen repeats itself after nb_batches
for i in range(4):
test_gen(valgen, n=1)
# # Train
history = {}
traingen = gen(X_train, Y_train, batch_size=batch_size, nflips=nflips, model=model)
valgen = gen(X_test, Y_test, nb_batches=nb_val_samples//batch_size, batch_size=batch_size)
r = next(traingen)
r[0].shape, r[1].shape, len(r)
for iteration in range(500):
print 'Iteration', iteration
h = model.fit_generator(traingen, samples_per_epoch=nb_train_samples,
nb_epoch=1, validation_data=valgen, nb_val_samples=nb_val_samples
)
for k,v in h.history.iteritems():
history[k] = history.get(k,[]) + v
with open('data/%s.history.pkl'%FN,'wb') as fp:
pickle.dump(history,fp,-1)
model.save_weights('data/%s.hdf5'%FN, overwrite=True)
gensamples(batch_size=batch_size)
|
train.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
# Here are all the installs and imports you will need for your word cloud script and uploader widget
# !pip install wordcloud
# !pip install fileupload
# !pip install ipywidgets
# !jupyter nbextension install --py --user fileupload
# !jupyter nbextension enable --py fileupload
import numpy as np
import wordcloud
from matplotlib import pyplot as plt
from IPython.display import display
import fileupload
import io
import sys
# + pycharm={"name": "#%%\n"}
def _upload():
_upload_widget = fileupload.FileUploadWidget()
def _cb(change):
global file_contents
decoded = io.StringIO(change['owner'].data.decode('utf-8'))
filename = change['owner'].filename
print('Uploaded `{}` ({:.2f} kB)'.format(
filename, len(decoded.read()) / 2 ** 10))
file_contents = decoded.getvalue()
_upload_widget.observe(_cb, names='data')
display(_upload_widget)
_upload()
|
PartI/Lab_1_1/PY/Google_FinalProject_PartI_Module_6/Word_Cloud/WC.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# This notebook illustrates the use of the more advanced features of OpenMC's multi-group mode and the openmc.mgxs.Library class. In the process, it demonstrates the following features:
#
# - Calculation of multi-group cross sections for a simplified BWR 8x8 assembly with isotropic and angle-dependent MGXS.
# - Automated creation and storage of MGXS with openmc.mgxs.Library
# - Fission rate comparison between continuous-energy and the two multi-group OpenMC cases.
#
# To avoid focusing on unimportant details, the BWR assembly in this notebook is greatly simplified. The descriptions which follow will point out some areas of simplification.
# # Generate Input Files
# +
import os
import matplotlib.pyplot as plt
import numpy as np
import openmc
# %matplotlib inline
# -
# We will be running a rodded 8x8 assembly with Gadolinia fuel pins. Let's create all the elemental data we would need for this case.
# Instantiate some elements
elements = {}
for elem in ['H', 'O', 'U', 'Zr', 'Gd', 'B', 'C', 'Fe']:
elements[elem] = openmc.Element(elem)
# With the elements we defined, we will now create the materials we will use later.
#
# Material Definition Simplifications:
#
# - This model will be run at room temperature so the NNDC ENDF-B/VII.1 data set can be used but the water density will be representative of a module with around 20% voiding. This water density will be non-physically used in all regions of the problem.
# - Steel is composed of more than just iron, but we will only treat it as such here.
#
# +
materials = {}
# Fuel
materials['Fuel'] = openmc.Material(name='Fuel')
materials['Fuel'].set_density('g/cm3', 10.32)
materials['Fuel'].add_element(elements['O'], 2)
materials['Fuel'].add_element(elements['U'], 1, enrichment=3.)
# Gadolinia bearing fuel
materials['Gad'] = openmc.Material(name='Gad')
materials['Gad'].set_density('g/cm3', 10.23)
materials['Gad'].add_element(elements['O'], 2)
materials['Gad'].add_element(elements['U'], 1, enrichment=3.)
materials['Gad'].add_element(elements['Gd'], .02)
# Zircaloy
materials['Zirc2'] = openmc.Material(name='Zirc2')
materials['Zirc2'].set_density('g/cm3', 6.55)
materials['Zirc2'].add_element(elements['Zr'], 1)
# Boiling Water
materials['Water'] = openmc.Material(name='Water')
materials['Water'].set_density('g/cm3', 0.6)
materials['Water'].add_element(elements['H'], 2)
materials['Water'].add_element(elements['O'], 1)
# Boron carbide for the control rods
materials['B4C'] = openmc.Material(name='B4C')
materials['B4C'].set_density('g/cm3', 0.7 * 2.52)
materials['B4C'].add_element(elements['B'], 4)
materials['B4C'].add_element(elements['C'], 1)
# Steel
materials['Steel'] = openmc.Material(name='Steel')
materials['Steel'].set_density('g/cm3', 7.75)
materials['Steel'].add_element(elements['Fe'], 1)
# -
# We can now create a Materials object that can be exported to an actual XML file.
# +
# Instantiate a Materials object
materials_file = openmc.Materials(materials.values())
# Export to "materials.xml"
materials_file.export_to_xml()
# -
# Now let's move on to the geometry. The first step is to define some constants which will be used to set our dimensions; then we can start creating the surfaces and regions for the problem: the 8x8 lattice, the rods, and the control blade.
#
# Before proceeding let's discuss some simplifications made to the problem geometry:
# - To enable the use of an equal-width mesh for running the multi-group calculations, the intra-assembly gap was increased to the same size as the pitch of the 8x8 fuel lattice
# - The can is neglected
# - The pin-in-water geometry for the control blade is ignored and instead the blade is a solid block of B4C
# - Rounded corners are ignored
# - There is no cladding for the water rod
# +
# Set constants for the problem and assembly dimensions
fuel_rad = 0.53213
clad_rad = 0.61341
Np = 8
pin_pitch = 1.6256
length = float(Np + 2) * pin_pitch
assembly_width = length - 2. * pin_pitch
rod_thick = 0.47752 / 2. + 0.14224
rod_span = 7. * pin_pitch
surfaces = {}
# Create boundary planes to surround the geometry
surfaces['Global x-'] = openmc.XPlane(x0=0., boundary_type='reflective')
surfaces['Global x+'] = openmc.XPlane(x0=length, boundary_type='reflective')
surfaces['Global y-'] = openmc.YPlane(y0=0., boundary_type='reflective')
surfaces['Global y+'] = openmc.YPlane(y0=length, boundary_type='reflective')
# Create cylinders for the fuel and clad
surfaces['Fuel Radius'] = openmc.ZCylinder(R=fuel_rad)
surfaces['Clad Radius'] = openmc.ZCylinder(R=clad_rad)
surfaces['Assembly x-'] = openmc.XPlane(x0=pin_pitch)
surfaces['Assembly x+'] = openmc.XPlane(x0=length - pin_pitch)
surfaces['Assembly y-'] = openmc.YPlane(y0=pin_pitch)
surfaces['Assembly y+'] = openmc.YPlane(y0=length - pin_pitch)
# Set surfaces for the control blades
surfaces['Top Blade y-'] = openmc.YPlane(y0=length - rod_thick)
surfaces['Top Blade x-'] = openmc.XPlane(x0=pin_pitch)
surfaces['Top Blade x+'] = openmc.XPlane(x0=rod_span)
surfaces['Left Blade x+'] = openmc.XPlane(x0=rod_thick)
surfaces['Left Blade y-'] = openmc.YPlane(y0=length - rod_span)
surfaces['Left Blade y+'] = openmc.YPlane(y0=9. * pin_pitch)
# -
# With the surfaces defined, we can now construct regions with these surfaces before we use those to create cells
# Set regions for geometry building
regions = {}
regions['Global'] = \
(+surfaces['Global x-'] & -surfaces['Global x+'] &
+surfaces['Global y-'] & -surfaces['Global y+'])
regions['Assembly'] = \
(+surfaces['Assembly x-'] & -surfaces['Assembly x+'] &
+surfaces['Assembly y-'] & -surfaces['Assembly y+'])
regions['Fuel'] = -surfaces['Fuel Radius']
regions['Clad'] = +surfaces['Fuel Radius'] & -surfaces['Clad Radius']
regions['Water'] = +surfaces['Clad Radius']
regions['Top Blade'] = \
(+surfaces['Top Blade y-'] & -surfaces['Global y+']) & \
(+surfaces['Top Blade x-'] & -surfaces['Top Blade x+'])
regions['Top Steel'] = \
(+surfaces['Global x-'] & -surfaces['Top Blade x-']) & \
(+surfaces['Top Blade y-'] & -surfaces['Global y+'])
regions['Left Blade'] = \
(+surfaces['Left Blade y-'] & -surfaces['Left Blade y+']) & \
(+surfaces['Global x-'] & -surfaces['Left Blade x+'])
regions['Left Steel'] = \
(+surfaces['Left Blade y+'] & -surfaces['Top Blade y-']) & \
(+surfaces['Global x-'] & -surfaces['Left Blade x+'])
regions['Corner Blade'] = \
regions['Left Steel'] | regions['Top Steel']
regions['Water Fill'] = \
regions['Global'] & ~regions['Assembly'] & \
~regions['Top Blade'] & ~regions['Left Blade'] &\
~regions['Corner Blade']
# We will begin building the 8x8 assembly. To do that we will have to build the cells and universe for each pin type (fuel, gadolinia-fuel, and water).
# +
universes = {}
cells = {}
for name, mat, in zip(['Fuel Pin', 'Gd Pin'],
[materials['Fuel'], materials['Gad']]):
universes[name] = openmc.Universe(name=name)
cells[name] = openmc.Cell(name=name)
cells[name].fill = mat
cells[name].region = regions['Fuel']
universes[name].add_cell(cells[name])
cells[name + ' Clad'] = openmc.Cell(name=name + ' Clad')
cells[name + ' Clad'].fill = materials['Zirc2']
cells[name + ' Clad'].region = regions['Clad']
universes[name].add_cell(cells[name + ' Clad'])
cells[name + ' Water'] = openmc.Cell(name=name + ' Water')
cells[name + ' Water'].fill = materials['Water']
cells[name + ' Water'].region = regions['Water']
universes[name].add_cell(cells[name + ' Water'])
universes['Hole'] = openmc.Universe(name='Hole')
cells['Hole'] = openmc.Cell(name='Hole')
cells['Hole'].fill = materials['Water']
universes['Hole'].add_cell(cells['Hole'])
# -
# Let's use this pin information to create our 8x8 assembly.
# +
# Create fuel assembly Lattice
universes['Assembly'] = openmc.RectLattice(name='Assembly')
universes['Assembly'].pitch = (pin_pitch, pin_pitch)
universes['Assembly'].lower_left = [pin_pitch, pin_pitch]
f = universes['Fuel Pin']
g = universes['Gd Pin']
h = universes['Hole']
lattices = [[f, f, f, f, f, f, f, f],
[f, f, f, f, f, f, f, f],
[f, f, f, g, f, g, f, f],
[f, f, g, h, h, f, g, f],
[f, f, f, h, h, f, f, f],
[f, f, g, f, f, f, g, f],
[f, f, f, g, f, g, f, f],
[f, f, f, f, f, f, f, f]]
# Store the array of lattice universes
universes['Assembly'].universes = lattices
cells['Assembly'] = openmc.Cell(name='Assembly')
cells['Assembly'].fill = universes['Assembly']
cells['Assembly'].region = regions['Assembly']
# -
# So far we have the rods and water within the assembly, but we still need the control blade and the water that fills the rest of the space. We will create those cells now.
# +
# The top portion of the blade, poisoned with B4C
cells['Top Blade'] = openmc.Cell(name='Top Blade')
cells['Top Blade'].fill = materials['B4C']
cells['Top Blade'].region = regions['Top Blade']
# The left portion of the blade, poisoned with B4C
cells['Left Blade'] = openmc.Cell(name='Left Blade')
cells['Left Blade'].fill = materials['B4C']
cells['Left Blade'].region = regions['Left Blade']
# The top-left corner portion of the blade, with no poison
cells['Corner Blade'] = openmc.Cell(name='Corner Blade')
cells['Corner Blade'].fill = materials['Steel']
cells['Corner Blade'].region = regions['Corner Blade']
# Water surrounding all other cells and our assembly
cells['Water Fill'] = openmc.Cell(name='Water Fill')
cells['Water Fill'].fill = materials['Water']
cells['Water Fill'].region = regions['Water Fill']
# -
# OpenMC requires that there is a "root" universe. Let us create our root universe and fill it with the cells just defined.
# Create root Universe
universes['Root'] = openmc.Universe(name='root universe', universe_id=0)
universes['Root'].add_cells([cells['Assembly'], cells['Top Blade'],
cells['Corner Blade'], cells['Left Blade'],
cells['Water Fill']])
# What do you do after you create your model? Check it! We will use the plotting capabilities of the Python API to do this for us.
#
# When doing so, we will color by material: fuel in red, gadolinia fuel in yellow, Zircaloy cladding in light grey, water in blue, B4C in black, and steel in a darker grey.
universes['Root'].plot(origin=(length / 2., length / 2., 0.),
pixels=(500, 500), width=(length, length),
color_by='material',
colors={materials['Fuel']: (1., 0., 0.),
materials['Gad']: (1., 1., 0.),
materials['Zirc2']: (0.5, 0.5, 0.5),
materials['Water']: (0.0, 0.0, 1.0),
materials['B4C']: (0.0, 0.0, 0.0),
materials['Steel']: (0.4, 0.4, 0.4)})
# Looks pretty good to us!
#
# We now must create a geometry that is assigned a root universe and export it to XML.
# +
# Create Geometry and set root universe
geometry = openmc.Geometry(universes['Root'])
# Export to "geometry.xml"
geometry.export_to_xml()
# -
# With the geometry and materials finished, we now just need to define simulation parameters, including how to run the model and what we want to learn from the model (i.e., define the tallies). We will start with our simulation parameters in the next block.
#
# This will include setting the run strategy, telling OpenMC not to bother creating a `tallies.out` file, and limiting the verbosity of our output to just the header and results to not clog up our notebook with results from each batch.
# +
# OpenMC simulation parameters
batches = 1000
inactive = 20
particles = 1000
# Instantiate a Settings object
settings_file = openmc.Settings()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
settings_file.output = {'tallies': False}
settings_file.verbosity = 4
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [pin_pitch, pin_pitch, 10, length - pin_pitch, length - pin_pitch, 10]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings_file.source = openmc.source.Source(space=uniform_dist)
# Export to "settings.xml"
settings_file.export_to_xml()
# -
# # Create an MGXS Library
#
# Now we are ready to generate multi-group cross sections! First, let's define a 2-group structure using the built-in EnergyGroups class.
# Instantiate a 2-group EnergyGroups object
groups = openmc.mgxs.EnergyGroups()
groups.group_edges = np.array([0., 0.625, 20.0e6])
# Next, we will instantiate an openmc.mgxs.Library for the energy groups with our the problem geometry. This library will use the default setting of isotropically-weighting the multi-group cross sections.
# Initialize a 2-group Isotropic MGXS Library for OpenMC
iso_mgxs_lib = openmc.mgxs.Library(geometry)
iso_mgxs_lib.energy_groups = groups
# Now, we must specify to the Library which types of cross sections to compute. OpenMC's multi-group mode can accept isotropic flux-weighted cross sections or angle-dependent cross sections, as well as supporting anisotropic scattering represented by either Legendre polynomials, histogram, or tabular angular distributions.
#
# Just like before, we will create the following multi-group cross sections needed to run an OpenMC simulation to verify the accuracy of our cross sections: "total", "absorption", "nu-fission", '"fission", "nu-scatter matrix", "multiplicity matrix", and "chi".
# "multiplicity matrix" is needed to provide OpenMC's multi-group mode with additional information needed to accurately treat scattering multiplication (i.e., (n,xn) reactions)) explicitly.
# Specify multi-group cross section types to compute
iso_mgxs_lib.mgxs_types = ['total', 'absorption', 'nu-fission', 'fission',
'nu-scatter matrix', 'multiplicity matrix', 'chi']
# Now we must specify the type of domain over which we would like the `Library` to compute multi-group cross sections. The domain type corresponds to the type of tally filter to be used in the tallies created to compute multi-group cross sections. At the present time, the `Library` supports "material", "cell", "universe", and "mesh" domain types.
#
# For the sake of example we will use a mesh to gather our cross sections. This mesh will be set up so there is one mesh bin for every pin cell.
# +
# Instantiate a tally Mesh
mesh = openmc.Mesh()
mesh.type = 'regular'
mesh.dimension = [10, 10]
mesh.lower_left = [0., 0.]
mesh.upper_right = [length, length]
# Specify a "mesh" domain type for the cross section tally filters
iso_mgxs_lib.domain_type = "mesh"
# Specify the mesh over which to compute multi-group cross sections
iso_mgxs_lib.domains = [mesh]
# -
# Now we will set the scattering treatment that we wish to use.
#
# In the [mg-mode-part-ii](mg-mode-part-ii.html) notebook, the cross sections were generated with a typical P3 scattering expansion in mind. Now, however, we will use a more advanced technique: OpenMC will directly provide us a histogram of the change-in-angle (i.e., $\mu$) distribution.
#
# Whereas in the [mg-mode-part-ii](mg-mode-part-ii.html) notebook all that was required was to set the `legendre_order` attribute of `mgxs_lib`, here we have slightly more work: we have to tell the Library that we want a histogram distribution (as it is not the default), and then tell it the number of bins.
#
# For this problem we will use 11 bins.
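# As a quick sanity check of what `histogram_bins = 11` implies, the sketch below builds 11 equal-width bins over the scattering cosine mu in [-1, 1]. This mirrors our understanding of the histogram representation; OpenMC constructs the actual bin edges internally.

```python
import numpy as np

n_bins = 11
edges = np.linspace(-1.0, 1.0, n_bins + 1)  # 12 edges -> 11 equal-width bins
print(edges)
print(np.diff(edges))  # every bin has width 2/11
```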
# +
# Set the scattering format to histogram and then define the number of bins
# Avoid a warning that corrections don't make sense with histogram data
iso_mgxs_lib.correction = None
# Set the histogram data
iso_mgxs_lib.scatter_format = 'histogram'
iso_mgxs_lib.histogram_bins = 11
# -
# Ok, we made our isotropic library with histogram-scattering!
#
# Now why don't we go ahead and create a library to do the same, but with angle-dependent MGXS. That is, we will avoid making the isotropic flux weighting approximation and instead just store a cross section for every polar and azimuthal angle pair.
#
# To do this with the Python API and OpenMC, all we have to do is set the number of polar and azimuthal bins; the API will divide the angular space into equal-width bins for us.
#
# Since this problem is symmetric in the z-direction, we only need to concern ourselves with the azimuthal variation here. We will use eight angles.
#
# Ok, we will repeat all the above steps for a new library object, but will also set the number of azimuthal bins at the end.
# +
# Let's repeat all of the above for an angular MGXS library so we can gather
# that in the same continuous-energy calculation
angle_mgxs_lib = openmc.mgxs.Library(geometry)
angle_mgxs_lib.energy_groups = groups
angle_mgxs_lib.mgxs_types = ['total', 'absorption', 'nu-fission', 'fission',
'nu-scatter matrix', 'multiplicity matrix', 'chi']
angle_mgxs_lib.domain_type = "mesh"
angle_mgxs_lib.domains = [mesh]
angle_mgxs_lib.correction = None
angle_mgxs_lib.scatter_format = 'histogram'
angle_mgxs_lib.histogram_bins = 11
# Set the angular bins to 8
angle_mgxs_lib.num_azimuthal = 8
# -
# Now that our libraries have been setup, let's make sure they contain the types of cross sections which meet the needs of OpenMC's multi-group solver. Note that this step is done automatically when writing the Multi-Group Library file later in the process (as part of the `mgxs_lib.write_mg_library()`), but it is a good practice to also run this before spending all the time running OpenMC to generate the cross sections.
# Check the libraries - if no errors are raised, then the library is satisfactory.
iso_mgxs_lib.check_library_for_openmc_mgxs()
angle_mgxs_lib.check_library_for_openmc_mgxs()
# Lastly, we use our two `Library` objects to construct the tallies needed to compute all of the requested multi-group cross sections in each domain.
#
# We expect a warning here telling us that the default Legendre order is not meaningful since we are using histogram scattering.
# Construct all tallies needed for the multi-group cross section library
iso_mgxs_lib.build_library()
angle_mgxs_lib.build_library()
# The tallies within the libraries can now be exported to a "tallies.xml" input file for OpenMC.
# Create a "tallies.xml" file for the MGXS Library
tallies_file = openmc.Tallies()
iso_mgxs_lib.add_to_tallies_file(tallies_file, merge=True)
angle_mgxs_lib.add_to_tallies_file(tallies_file, merge=True)
# In addition, we instantiate a fission rate mesh tally for eventual comparison of results.
# +
# Instantiate tally Filter
mesh_filter = openmc.MeshFilter(mesh)
# Instantiate the Tally
tally = openmc.Tally(name='mesh tally')
tally.filters = [mesh_filter]
tally.scores = ['fission']
# Add tally to collection
tallies_file.append(tally, merge=True)
# Export all tallies to a "tallies.xml" file
tallies_file.export_to_xml()
# -
# Time to run the calculation and get our results!
# Run OpenMC
openmc.run()
# To make the files available and not be over-written when running the multi-group calculation, we will now rename the statepoint and summary files.
# Move the StatePoint File
ce_spfile = './statepoint_ce.h5'
os.rename('statepoint.' + str(batches) + '.h5', ce_spfile)
# Move the Summary file
ce_sumfile = './summary_ce.h5'
os.rename('summary.h5', ce_sumfile)
# # Tally Data Processing
#
# Our simulation ran successfully and created statepoint and summary output files. Let's begin by loading the StatePoint file, but not automatically linking the summary file.
# Load the statepoint file, but not the summary file, as it is a different filename than expected.
sp = openmc.StatePoint(ce_spfile, autolink=False)
# In addition to the statepoint file, our simulation also created a summary file which encapsulates information about the materials and geometry. This is necessary for the `openmc.mgxs.Library` to properly process the tally data. We first create a `Summary` object and link it with the statepoint. Normally this would not need to be performed, but since we have renamed our summary file to avoid conflicts with the multi-group calculation's summary file, we load it explicitly.
su = openmc.Summary(ce_sumfile)
sp.link_with_summary(su)
# The statepoint is now ready to be analyzed. To create our libraries we simply have to load the tallies from the statepoint into each `Library` and our `MGXS` objects will compute the cross sections for us under-the-hood.
# Initialize MGXS Library with OpenMC statepoint data
iso_mgxs_lib.load_from_statepoint(sp)
angle_mgxs_lib.load_from_statepoint(sp)
# The next step will be to prepare the input for OpenMC to use our newly created multi-group data.
# # Isotropic Multi-Group OpenMC Calculation
# We will now use the `Library` to produce the isotropic multi-group cross section data set for use by the OpenMC multi-group solver.
#
# If the model to be run in multi-group mode is the same as the continuous-energy mode, the `openmc.mgxs.Library` class has the ability to directly create the multi-group geometry, materials, and multi-group library for us.
# Note that this feature is only useful if the MG model is intended to replicate the CE geometry - it is not useful if the CE library is not the same geometry (like it would be for generating MGXS from a generic spectral region).
#
# This method creates and assigns the materials automatically, including creating a geometry which is equivalent to our mesh cells for which the cross sections were derived.
# +
# Allow the API to create our Library, materials, and geometry file
iso_mgxs_file, materials_file, geometry_file = iso_mgxs_lib.create_mg_mode()
# Tell the materials file what we want to call the multi-group library
materials_file.cross_sections = 'mgxs.h5'
# Write our newly-created files to disk
iso_mgxs_file.export_to_hdf5('mgxs.h5')
materials_file.export_to_xml()
geometry_file.export_to_xml()
# -
# Next, we can make the changes we need to the settings file.
# These changes are limited to telling OpenMC to run a multi-group calculation and provide the location of our multi-group cross section file.
# +
# Set the energy mode
settings_file.energy_mode = 'multi-group'
# Export to "settings.xml"
settings_file.export_to_xml()
# -
# Let's clear up the tallies file so it doesn't include all the extra tallies for re-generating a multi-group library
# +
# Create a "tallies.xml" file for the MGXS Library
tallies_file = openmc.Tallies()
# Add our fission rate mesh tally
tallies_file.append(tally)
# Export to "tallies.xml"
tallies_file.export_to_xml()
# -
# Before running the calculation let's look at our meshed model. It might not be interesting, but let's take a look anyway.
geometry_file.root_universe.plot(origin=(length / 2., length / 2., 0.),
pixels=(300, 300), width=(length, length),
color_by='material')
# So, we see a 10x10 grid with a different color for every material, sounds good!
#
# At this point, the problem is set up and we can run the multi-group calculation.
# Execute the Isotropic MG OpenMC Run
openmc.run()
# Before we go to the angle-dependent case, let's save the StatePoint and Summary files so they don't get overwritten.
# Move the StatePoint File
iso_mg_spfile = './statepoint_mg_iso.h5'
os.rename('statepoint.' + str(batches) + '.h5', iso_mg_spfile)
# Move the Summary file
iso_mg_sumfile = './summary_mg_iso.h5'
os.rename('summary.h5', iso_mg_sumfile)
# # Angle-Dependent Multi-Group OpenMC Calculation
#
# Let's now run the calculation with the angle-dependent multi-group cross sections. This process will be the exact same as above, except this time we will use the angle-dependent Library as our starting point.
#
# We do not need to re-write the materials, geometry, or tallies file to disk since they are the same as for the isotropic case.
# Let's repeat for the angle-dependent case
angle_mgxs_lib.load_from_statepoint(sp)
angle_mgxs_file, materials_file, geometry_file = angle_mgxs_lib.create_mg_mode()
angle_mgxs_file.export_to_hdf5()
# At this point, the problem is set up and we can run the multi-group calculation.
# Execute the angle-dependent OpenMC Run
openmc.run()
# # Results Comparison
# In this section we will compare the eigenvalues and fission rate distributions of the continuous-energy, isotropic multi-group and angle-dependent multi-group cases.
#
# We will begin by loading the multi-group statepoint files, first the isotropic, then the angle-dependent. The angle-dependent statepoint was not renamed, so we can autolink its summary.
# +
# Load the isotropic statepoint file
iso_mgsp = openmc.StatePoint(iso_mg_spfile, autolink=False)
iso_mgsum = openmc.Summary(iso_mg_sumfile)
iso_mgsp.link_with_summary(iso_mgsum)
# Load the angle-dependent statepoint file
angle_mgsp = openmc.StatePoint('statepoint.' + str(batches) + '.h5')
# -
# ## Eigenvalue Comparison
# Next, we can load the eigenvalues and compare them
# +
ce_keff = sp.k_combined
iso_mg_keff = iso_mgsp.k_combined
angle_mg_keff = angle_mgsp.k_combined
# Find eigenvalue bias
iso_bias = 1.0E5 * (ce_keff[0] - iso_mg_keff[0])
angle_bias = 1.0E5 * (ce_keff[0] - angle_mg_keff[0])
# -
# Let's compare the eigenvalues in units of pcm (per-cent-mille, 1 pcm = 1E-5 in k)
print('Isotropic to CE Bias [pcm]: {0:1.1f}'.format(iso_bias))
print('Angle to CE Bias [pcm]: {0:1.1f}'.format(angle_bias))
# We see a large reduction in error by switching to angle-dependent multi-group cross sections!
#
# Of course, this rodded and partially voided BWR problem was chosen specifically to exacerbate the angular variation of the reaction rates (and thus cross sections). Such improvements should not be expected in every case, especially if localized absorbers are not present.
#
# It is important to note that both eigenvalues can be improved by the application of finer geometric or energetic discretizations, but this shows that the angle discretization may be a factor for consideration.
#
# ## Fission Rate Distribution Comparison
# Next we will visualize the mesh tally results obtained from our three cases.
#
# This will be performed by first obtaining the one-group fission rate tally information from our state point files. After we have this information we will re-shape the data to match the original mesh laydown. We will then normalize, and finally create side-by-side plots of all.
sp_files = [sp, iso_mgsp, angle_mgsp]
titles = ['Continuous-Energy', 'Isotropic Multi-Group',
'Angle-Dependent Multi-Group']
fiss_rates = []
fig = plt.figure(figsize=(12, 6))
for i, (case, title) in enumerate(zip(sp_files, titles)):
# Get our mesh tally information
mesh_tally = case.get_tally(name='mesh tally')
fiss_rates.append(mesh_tally.get_values(scores=['fission']))
# Reshape the array
fiss_rates[-1].shape = mesh.dimension
# Normalize the fission rates
fiss_rates[-1] /= np.mean(fiss_rates[-1][fiss_rates[-1] > 0.])
# Set 0s to NaNs so they show as white
fiss_rates[-1][fiss_rates[-1] == 0.] = np.nan
fig = plt.subplot(1, len(titles), i + 1)
# Plot only the fueled regions
plt.imshow(fiss_rates[-1][1:-1, 1:-1], cmap='jet', origin='lower',
vmin=0.4, vmax=4.)
plt.title(title + '\nFission Rates')
# With this colormap, dark blue is the lowest power and dark red is the highest power.
#
# We see general agreement between the fission rate distributions, but it looks like there may be less of a gradient near the rods in the continuous-energy and angle-dependent MGXS cases than in the isotropic MGXS case.
#
# To better see the differences, let's plot the ratios of the fission rates for our two multi-group cases to the continuous-energy case.
# +
# Calculate and plot the ratios of MG to CE for each of the 2 MG cases
ratios = []
fig, axes = plt.subplots(figsize=(12, 6), nrows=1, ncols=2)
for i, (case, title, axis) in enumerate(zip(sp_files[1:], titles[1:], axes.flat)):
    # Get our ratio relative to the CE case (fiss_rates[0] holds the CE rates)
ratios.append(np.divide(fiss_rates[i + 1], fiss_rates[0]))
# Plot only the fueled regions
im = axis.imshow(ratios[-1][1:-1, 1:-1], cmap='bwr', origin='lower',
vmin = 0.9, vmax = 1.1)
axis.set_title(title + '\nFission Rates Relative\nto Continuous-Energy')
# Add a color bar
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.85, 0.15, 0.05, 0.7])
fig.colorbar(im, cax=cbar_ax)
# -
# With this ratio it's clear that the errors are significantly worse in the isotropic case. These errors are conveniently located right where the most anisotropy is expected: by the control blades and by the Gd-bearing pins!
|
examples/jupyter/mg-mode-part-iii.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Let's model a game character, Marine, that attacks an opponent to reduce its health, as a class
# - The constructor takes health and attack power (attack_pw)
# - When this character attacks an opponent (who), the opponent's health decreases by the attack power
# - After an attack, show the health of both the attacker and the character that was attacked
# +
class Marine:
def __init__(self,health, attack_pw):
self.health = health
self.attack_pw= attack_pw
def attack(self,who):
who.health -= self.attack_pw
        return 'attacker health: {}, defender health: {} '.format(self.health, who.health)
marine1 = Marine(40,5)
marine2 = Marine(40,3)
print(marine1.attack(marine2))
print(marine1.attack(marine2))
print(marine2.attack(marine1))
# +
class Marine:
    def __init__(self, health, attack_pw):
        pass  # TODO 1: store health and attack_pw on self
    def attack(self, who):
        # TODO 2: reduce who's health by self.attack_pw
        # TODO 3: fill in the format() arguments
        return 'attacker health: {}, defender health: {} '.format(None, None)
marine1 = Marine(40,5)
marine2 = Marine(40,5)
print(marine1.attack(marine2))
# +
class Calculator:
def add (self, num1, num2):
self.num1 = num1
self.num2 = num2
return self.num1 + self.num2
def multiple (self, num1, num2):
self.num1 = num1
self.num2 = num2
return self.num1 * self.num2
cal1 = Calculator()
temp = cal1.add(3,4)
cal1.multiple(5,temp)
# +
class Calculator:
def add (self, num1, num2):
self.num1 = num1
self.num2 = num2
return self.num1 + self.num2
def multiple (self, num1, num2):
self.num1 = num1
self.num2 = num2
return self.num1 * self.num2
# TODO: write your code here (use the Calculator class defined above)
# -
class Calculator2(Calculator):
pass
cal2 = Calculator2()
cal2.add(3,4)
# TODO: declare a class and its functions here (e.g. another subclass of Calculator)
# class ...:
#     pass
# +
import random
class SelectMenu:
    def __init__(self, menu_ls):
        pass  # TODO 1: store menu_ls on self
    def get_menu(self):
        pass  # TODO 2: return a random item from self.menu_ls
menu = SelectMenu(['짬뽕', '초밥', '쌀국수', '주꾸미'])
print(menu.get_menu())
# +
import random
class SelectMenu:
def __init__(self, menu_ls):
self.menu_ls = menu_ls
def get_menu(self):
choice = random.choice(self.menu_ls)
return choice
'''
def get_menu(self):
choice = self.menu_ls[random.randint(0,len(self.menu_ls)-1)]
return choice
'''
menu = SelectMenu(['짬뽕', '초밥', '쌀국수', '주꾸미'])
print(menu.get_menu())
# -
# %%writefile Calc.py
class Calculator:
def add (self, num1, num2):
self.num1 = num1
self.num2 = num2
return self.num1 + self.num2
def multiple (self, num1, num2):
self.num1 = num1
self.num2 = num2
return self.num1 * self.num2
import Calc
cal1 = Calc.Calculator()
cal1.add(3,4)
|
python/MultiCampus/test_01_class.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import random
import string
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("whitegrid")
import matplotlib.cm as cm
from matplotlib.colors import Normalize
import numpy.matlib as npmatlib
# %matplotlib inline
import dataloader
import util
from scipy.optimize import minimize
#load data
dataset = dataloader.DataLoader(verbose=True)
x_train, x_test, y_train, y_test, y_reg_train, y_reg_test = dataset.load_data()
#vectorize the images and data
x_train = np.reshape(x_train, [x_train.shape[0], x_train.shape[1]*x_train.shape[2]]).T
x_test = np.reshape(x_test, [x_test.shape[0], x_test.shape[1]*x_test.shape[2]]).T
y_reg_train = y_reg_train.T
y_reg_test = y_reg_test.T
#forward model (i.e. simulator)
G = np.load('G.npy')
#linear least square solution
sz = 28
ref = 3
num_samples = 100
#dobs
d_obs = np.squeeze(np.multiply(y_reg_test[:, ref:ref+1], np.expand_dims(dataset.maxs, axis=-1)))
m_ref = np.squeeze(x_test[:, ref:ref+1])
#color by label
my_cmap = cm.get_cmap('jet')
my_norm = Normalize(vmin=0, vmax=9)
cs = my_cmap(my_norm(y_test))
print(x_train.shape)
print(x_test.shape)
print(y_reg_train.shape)
print(y_reg_test.shape)
print(G.shape)
print(d_obs.shape)
print(m_ref.shape)
# +
#objective to minimize: RMS misfit between simulated and observed data
def func(m):
    return np.sqrt(np.mean(np.square(G.T@m - d_obs)))
#analytic gradient of the objective (note the chain rule through the square root)
def dldm(m):
    return np.squeeze(G@(G.T@m - d_obs)) / (d_obs.size * func(m))
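# Since the gradient is written by hand, it is worth verifying an objective/gradient pair with central finite differences. The sketch below uses a small random stand-in operator (not the real `G`/`d_obs`) and the half-sum-of-squares objective, whose gradient is exactly `G @ (G.T @ m - d_obs)`:

```python
import numpy as np

rng = np.random.default_rng(0)
G_small = rng.normal(size=(8, 5))      # stand-in forward operator
d_small = rng.normal(size=5)           # stand-in observed data

def loss(m):
    # half sum-of-squares misfit
    r = G_small.T @ m - d_small
    return 0.5 * np.sum(r ** 2)

def grad(m):
    # analytic gradient of loss()
    return G_small @ (G_small.T @ m - d_small)

m = rng.normal(size=8)
eps = 1e-6
# central finite differences along each coordinate direction
fd = np.array([(loss(m + eps * e) - loss(m - eps * e)) / (2 * eps)
               for e in np.eye(8)])
print(np.max(np.abs(fd - grad(m))))  # should be tiny (~1e-8 or below)
```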
# +
#callback to monitor optimization process
from IPython.display import clear_output
i = 0
x = []
losses = []
logs = []
def monitor(xk):
global i, x, losses, logs
fig = plt.figure(figsize=[15, 5])
    logs.append(xk.copy())
x.append(i)
losses.append(func(xk))
i += 1
clear_output(wait=True)
plt.subplot(1, 2, 1)
plt.plot(x, losses, label="loss", c = 'green')
plt.ylabel("Loss function")
plt.xlabel("Iter.")
plt.title("Loss vs iter.")
plt.subplot(1, 2, 2)
plt.imshow(np.reshape(xk, [sz, sz]), cmap="viridis", aspect='equal', vmin=0, vmax=1)
plt.xticks([]), plt.yticks([])
plt.title("Inv. model")
plt.show()
fig.savefig('readme/grad_full_dim.png')
# +
#initial guess (the optimization is sensitive to this choice!)
m0 = np.zeros_like(m_ref)
print(m0.shape)
#minimize the objective function
res = minimize(func, m0, method='BFGS', jac=dldm, callback=monitor, options={'gtol': 1e-6, 'disp': True})
m_sol = np.expand_dims(res.x, axis=-1)
print(m_sol.shape)
# +
#forward simulation on the inverted model
y_sim = (m_sol.T@G).T
#compare model and data (i.e. reference case vs solution)
f = plt.figure(figsize=(10, 3))
plt.subplot(1, 3, 1)
plt.imshow(np.reshape(m_ref, [sz, sz]), cmap="viridis", vmin=0, vmax=1, aspect='equal')
plt.xticks([]), plt.yticks([])
plt.colorbar()
plt.title("Ref model")
plt.subplot(1, 3, 2)
plt.imshow(np.reshape(m_sol, [sz, sz]), cmap="viridis", aspect='equal')
plt.xticks([]), plt.yticks([])
plt.colorbar()
plt.title("Inv. model")
plt.subplot(1, 3, 3)
plt.plot(np.squeeze(np.multiply(y_reg_test[:, ref:ref+1], np.expand_dims(dataset.maxs, axis=-1))), ls=':', c='k', label='True', alpha=0.9)
plt.plot(y_sim, c=cs[y_test[ref]], label='Sim.', alpha=0.4)
#plt.ylim([0, 1])
plt.title("Data")
plt.legend()
plt.tight_layout()
f.savefig('readme/grad_full_dim_comp.png')
# -
|
gradient-full-dim.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] toc=true
# <h1>Table of Contents<span class="tocSkip"></span></h1>
# <div class="toc"><ul class="toc-item"></ul></div>
# +
import os
import datetime
import gc
import numpy as np
import pandas as pd
import lightgbm
from sklearn.preprocessing import StandardScaler
from tqdm import tqdm_notebook
### feature engineer part
import techjam_fe
# -
# Edit data directory here
DATA_DIR = ".\\techjam"
X_train, y_train, X_test = techjam_fe.get_prep_data(DATA_DIR)
cat_feature = ['gender','ocp_cd','age_gnd','gnd_ocp','age_ocp', 'age']
def techjam_score(y_pred, y_true):
y_pred = np.array(y_pred)
y_true = np.array(y_true)
return 100 - 100 * np.mean((y_pred-y_true) ** 2 / (np.minimum(2*y_true, y_pred) + y_true)**2)
def techjam_feval_log(y_pred, dtrain):
y_true = dtrain.get_label()
return 'techjam_score', techjam_score(np.exp(y_pred), np.exp(y_true)), True
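# The custom metric can be sanity-checked in isolation (the function is restated here so the snippet is self-contained; the values are illustrative, not from the dataset): a perfect prediction scores exactly 100, and any error lowers the score.

```python
import numpy as np

def techjam_score(y_pred, y_true):
    y_pred = np.array(y_pred)
    y_true = np.array(y_true)
    return 100 - 100 * np.mean((y_pred - y_true) ** 2 / (np.minimum(2 * y_true, y_pred) + y_true) ** 2)

y = np.array([10.0, 20.0, 30.0])
print(techjam_score(y, y))              # 100.0 for a perfect prediction
print(techjam_score(1.1 * y, y) < 100)  # True: over-prediction lowers the score
```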
# +
for cat in cat_feature:
X_test[cat] =X_test[cat].astype(int)
X_train[cat] =X_train[cat].astype(int)
train_data = lightgbm.Dataset(X_train, label=y_train, categorical_feature=cat_feature , free_raw_data=False)
num_leaves_choices = [15, 31, 63, 127, 200, 255, 300, 350, 400,511 ,600]
ft_frac_choices = [0.6, 0.7, 0.8, 0.9, 1.0]
bagging_frac_choices = [0.6, 0.7, 0.8, 0.9, 1.0]
# We will store the cross validation results in a simple list,
# with tuples in the form of (hyperparam dict, cv score):
cv_results = []
for num_lv in tqdm_notebook(num_leaves_choices):
for bg_fac in bagging_frac_choices:
for ft_fac in ft_frac_choices:
hyperparams = {"boosting_type":'gbdt',
"objective": 'mape',
"metrics": 'None',
"num_leaves": num_lv,
"feature_fraction": ft_fac,
"bagging_fraction": bg_fac,
"learning_rate": 0.01
}
validation_summary = lightgbm.cv(hyperparams,
train_data,
num_boost_round=10000,
nfold=5,
feval=techjam_feval_log,
stratified=False,
shuffle=True,
early_stopping_rounds=50,
verbose_eval=10)
            optimal_num_trees = len(validation_summary["techjam_score-mean"])
            # Add the optimal number of boosting rounds to the hyperparameter dictionary:
            hyperparams["num_boost_round"] = optimal_num_trees
# And we append results to cv_results:
cv_results.append((hyperparams, validation_summary["techjam_score-mean"][-1]))
# -
sort_cv_result = sorted(cv_results, key=lambda tup:tup[1])
sort_cv_result[-1]
# +
#select parameter score > 92.21
# -
### select best 10 models
MODELS=[]
for params_and_score in tqdm_notebook(sort_cv_result[-10:]):
params = params_and_score[0]
model = lightgbm.train(params,
train_data,
)
MODELS.append(model)
### ensemble 10 models
pred = []
for model in MODELS:
y_pred = model.predict(X_test)
y_pred = np.exp(y_pred)
pred.append(y_pred)
pred=np.array(pred)
# perform ensemble
final_pred = pred.mean(axis=0)
### Create submission dataframe
submission = pd.DataFrame()
submission['id'] = [i for i in range(50001,65001)]
submission['final_pred'] = final_pred
|
techjam_model.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.7 local venv
# language: python
# name: python3.7
# ---
# # Extracting ellipse parmeters from rings
#
# During a powder diffraction experiment, the scattering occurs along concentric cones, originating from the sample position and named after two famous scientists: Debye and Scherrer.
#
# 
#
# Those cones are intersected by the detector, and the whole calibration step in pyFAI comes down to fitting the rings seen on the detector into a meaningful experimental geometry.
#
# In the most common case, a flat detector is mounted orthogonal to the incident beam and all pixels have the same size.
# The diffraction pattern is then a set of concentric circles.
# When the detector is still flat with identical pixels but the mounting is a bit *off*, or for some other technical reason, one gets a set of concentric ellipses instead.
# This procedure explains how to extract the center coordinates, axis lengths and orientation.
#
# The code in pyFAI is heavily inspired from:
# http://nicky.vanforeest.com/misc/fitEllipse/fitEllipse.html
# It uses an SVD decomposition, in a similar way to Wolfgang Kabsch's algorithm (1976), to retrieve the best ellipse fitting all points without actually performing an iterative fit.
#
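# The idea can be sketched in a few lines of NumPy (a simplified illustration, not pyFAI's actual implementation): stack the points into the design matrix of the general conic A·x² + B·xy + C·y² + D·x + E·y + F = 0 and take the right singular vector associated with the smallest singular value.

```python
import numpy as np

def conic_from_points(x, y):
    # Rows are (x^2, xy, y^2, x, y, 1); the conic coefficients are the
    # null-space direction of this matrix, i.e. the last row of Vt.
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]

t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
x = 3.0 * np.cos(t) + 1.0          # circle of radius 3 centred on (1, 2)
y = 3.0 * np.sin(t) + 2.0
A, B, C, D_, E, F = conic_from_points(x, y)
print(abs(A - C) < 1e-8, abs(B) < 1e-8)  # True True: a circle has A == C and B == 0
```

# From these six coefficients, the centre, semi-axes and orientation follow from the standard conic formulas, which is essentially what `fit_ellipse` returns.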
# %matplotlib inline
from matplotlib import pyplot
from pyFAI.utils.ellipse import fit_ellipse
import inspect
print(inspect.getsource(fit_ellipse))
# +
from matplotlib import patches
from numpy import rad2deg
def display(ptx, pty, ellipse=None):
"""A function to overlay a set of points and the calculated ellipse
"""
fig = pyplot.figure()
ax = fig.add_subplot(111)
if ellipse is not None:
error = False
y0, x0, angle, wlong, wshort = ellipse
if wshort == 0:
error = True
wshort = 0.0001
if wlong == 0:
error = True
wlong = 0.0001
patch = patches.Arc((x0, y0), width=wlong*2, height=wshort*2, angle=rad2deg(angle))
if error:
patch.set_color("red")
else:
patch.set_color("green")
ax.add_patch(patch)
bbox = patch.get_window_extent()
ylim = min(y0 - wlong, pty.min()), max(y0 + wlong, pty.max())
        xlim = min(x0 - wlong, ptx.min()), max(x0 + wlong, ptx.max())
else:
ylim = pty.min(), pty.max()
xlim = ptx.min(), ptx.max()
    ax.plot(ptx, pty, "o", color="blue")
ax.set_xlim(*xlim)
ax.set_ylim(*ylim)
pyplot.show()
# +
from numpy import sin, cos, random, pi, linspace
arc = 0.8
npt = 100
R = linspace(0, arc * pi, npt)
ptx = 1.5 * cos(R) + 2 + random.normal(scale=0.05, size=npt)
pty = sin(R) + 1. + random.normal(scale=0.05, size=npt)
ellipse = fit_ellipse(pty, ptx)
print(ellipse)
display(ptx, pty, ellipse)
# -
angles = linspace(0, pi / 2, 10)
pty = sin(angles) * 20 + 10
ptx = cos(angles) * 20 + 10
ellipse = fit_ellipse(pty, ptx)
print(ellipse)
display(ptx, pty, ellipse)
angles = linspace(0, pi * 2, 6, endpoint=False)
pty = sin(angles) * 10 + 50
ptx = cos(angles) * 20 + 100
ellipse = fit_ellipse(pty, ptx)
print(ellipse)
display(ptx, pty, ellipse)
# Center to zero
angles = linspace(0, 2*pi, 9, endpoint=False)
pty = sin(angles) * 10 + 0
ptx = cos(angles) * 20 + 0
ellipse = fit_ellipse(pty, ptx)
print(ellipse)
display(ptx, pty, ellipse)
angles = linspace(0, 2 * pi, 9, endpoint=False)
pty = 50 + 10 * cos(angles) + 5 * sin(angles)
ptx = 100 + 5 * cos(angles) + 15 * sin(angles)
ellipse = fit_ellipse(pty, ptx)
print(ellipse)
display(ptx, pty, ellipse)
# Points from a real peak-picking
from numpy import array
pty = array([0.06599215, 0.06105629, 0.06963708, 0.06900191, 0.06496001, 0.06352082, 0.05923421, 0.07080027, 0.07276284, 0.07170048])
ptx = array([0.05836343, 0.05866434, 0.05883284, 0.05872581, 0.05823667, 0.05839846, 0.0591999, 0.05907079, 0.05945377, 0.05909428])
try:
ellipse = fit_ellipse(pty, ptx)
except Exception as e:
ellipse = None
print(e)
display(ptx, pty, ellipse)
# Line
from numpy import arange
pty = arange(10)
ptx = arange(10)
try:
ellipse = fit_ellipse(pty, ptx)
except Exception as e:
ellipse = None
print(e)
display(ptx, pty, ellipse)
# ## Conclusion
# Within pyFAI's calibration process, the ellipse parameters are used in the first instance as the input guess for the fit procedure, which uses *slsqp* from scipy.optimize.
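# As a rough illustration of that refinement step (the residual below is a hypothetical choice, not pyFAI's actual objective), SLSQP can polish an approximate (centre, angle, semi-axes) guess of the kind `fit_ellipse` provides:

```python
import numpy as np
from scipy.optimize import minimize

def ellipse_residual(p, x, y):
    # p = (x0, y0, angle, a, b): algebraic residual of the rotated ellipse
    x0, y0, ang, a, b = p
    ct, st = np.cos(ang), np.sin(ang)
    xr = (x - x0) * ct + (y - y0) * st
    yr = -(x - x0) * st + (y - y0) * ct
    return np.sum(((xr / a) ** 2 + (yr / b) ** 2 - 1.0) ** 2)

t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
x = 20 * np.cos(t) + 100
y = 10 * np.sin(t) + 50
p0 = (99.0, 49.0, 0.05, 18.0, 11.0)   # rough starting guess
res = minimize(ellipse_residual, p0, args=(x, y), method='SLSQP')
print(np.round(res.x, 3))  # close to (100, 50, 0, 20, 10)
```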
|
doc/source/usage/tutorial/Ellipse/ellipse.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from tkinter import *
import FinalLogics
import Constant as c
class Game2048(Frame):
def __init__(self):
Frame.__init__(self)
self.grid()
self.master.title('2048')
self.master.bind("<Key>",self.key_down)
self.commands = {c.KEY_UP: FinalLogics.move_up, c.KEY_DOWN: FinalLogics.move_down, c.KEY_LEFT: FinalLogics.move_left, c.KEY_RIGHT: FinalLogics.move_right}
self.grid_cell = []
self.init_grid()
self.init_matrix()
self.update_grid()
self.mainloop()
def init_grid(self):
background = Frame(self,bg = c.BACKGROUND_COLOR_GAME, width= c.SIZE, height = c.SIZE)
background.grid()
for i in range(c.GRID_LEN):
grid_row = []
for j in range(c.GRID_LEN):
cell = Frame(background,bg= c.BACKGROUND_COLOR_CELLEMPTY, width = c.SIZE / c.GRID_LEN, height = c.SIZE / c.GRID_LEN)
cell.grid(row = i, column = j, padx=c.GRID_PADDING, pady = c.GRID_PADDING)
t = Label(master= cell, text = "",bg = c.BACKGROUND_COLOR_CELLEMPTY, justify= CENTER, font= c.FONT, width = 5, height = 2)
t.grid()
grid_row.append(t)
self.grid_cell.append(grid_row)
def init_matrix(self):
self.matrix = FinalLogics.start_game()
FinalLogics.add_new_2(self.matrix)
FinalLogics.add_new_2(self.matrix)
def update_grid(self):
for i in range(c.GRID_LEN):
for j in range(c.GRID_LEN):
new_number = self.matrix[i][j]
if new_number == 0:
self.grid_cell[i][j].configure(text="", bg= c.BACKGROUND_COLOR_CELLEMPTY)
else:
self.grid_cell[i][j].configure(text=str(new_number),bg = c.BACKGROUND_COLOR_DICT[new_number], fg = c.CELL_COLOR_DICT[new_number])
self.update_idletasks()
def key_down(self,event):
key = repr(event.char)
if key in self.commands:
self.matrix, hasChanged = self.commands[key](self.matrix)
if hasChanged:
FinalLogics.add_new_2(self.matrix)
self.update_grid()
hasChanged = False
if FinalLogics.get_current_state(self.matrix) == "WON":
self.grid_cell[1][1].configure(text = "You", bg= c.BACKGROUND_COLOR_CELLEMPTY)
self.grid_cell[1][2].configure(text = "Win", bg = c.BACKGROUND_COLOR_CELLEMPTY)
if FinalLogics.get_current_state(self.matrix) == "LOST":
self.grid_cell[1][1].configure(text = "You", bg= c.BACKGROUND_COLOR_CELLEMPTY)
self.grid_cell[1][2].configure(text = "Lost", bg = c.BACKGROUND_COLOR_CELLEMPTY)
gamegrid = Game2048()
|
Game2048.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
#
# # Solarized Light stylesheet
#
#
# This shows an example of "Solarized_Light" styling, which
# tries to replicate the styles of:
#
# - http://ethanschoonover.com/solarized
# - https://github.com/jrnold/ggthemes
# - http://www.pygal.org/en/stable/documentation/builtin_styles.html#light-solarized
#
# and work of:
#
# - https://github.com/tonysyu/mpltools
#
# using all 8 accents of the color palette - starting with blue
#
# ToDo:
#
# - Create alpha values for bar and stacked charts. .33 or .5
# - Apply Layout Rules
#
# +
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0, 10)
with plt.style.context('Solarize_Light2'):
plt.plot(x, np.sin(x) + x + np.random.randn(50))
plt.plot(x, np.sin(x) + 2 * x + np.random.randn(50))
plt.plot(x, np.sin(x) + 3 * x + np.random.randn(50))
plt.plot(x, np.sin(x) + 4 + np.random.randn(50))
plt.plot(x, np.sin(x) + 5 * x + np.random.randn(50))
plt.plot(x, np.sin(x) + 6 * x + np.random.randn(50))
plt.plot(x, np.sin(x) + 7 * x + np.random.randn(50))
plt.plot(x, np.sin(x) + 8 * x + np.random.randn(50))
# Number of accent colors in the color scheme
plt.title('8 Random Lines - Line')
plt.xlabel('x label', fontsize=14)
plt.ylabel('y label', fontsize=14)
plt.show()
|
matplotlib/gallery_jupyter/style_sheets/plot_solarizedlight2.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <a id=top-page></a>
#
# # <img src="../images/PCAfold-logo.svg" style="height:100px"> Python software to generate, analyze and improve PCA-derived low-dimensional manifolds
#
# #### Authors: **<NAME>**, **<NAME>**, **<NAME>**, **<NAME>**
#
# *Université Libre de Bruxelles*, *The University of Utah*, 2020
#
# This Jupyter notebook contains code presented within the original software publication.
#
# ***
#
# #### Table of contents
#
# - [**Sample code snippets**](#sample)
# - [**Illustrative example**](#example)
# - [**References**](#references)
#
# ***
# <a id=sample></a>
#
# ## Sample code snippets
#
# [**↑ Go to the top**](#top-page)
#
# Code presented in section *2.2 Software functionalities and sample code snippets*.
# #### Data sampling
# +
from PCAfold import DataSampler
import numpy as np
# Generate a dummy data set:
X = np.random.rand(1000,3)
# Generate a dummy vector of cluster classifications:
idx = np.zeros((1000,))
idx[500:800] = 1
idx = idx.astype(int)
# Instantiate DataSampler class object:
selection = DataSampler(idx, idx_test=None, random_seed=100, verbose=True)
# (1) Select equal number of samples from each cluster:
(idx_train, idx_test) = selection.number(20, test_selection_option=1)
# (2) Select equal percentage of samples from each cluster:
(idx_train, idx_test) = selection.percentage(20, test_selection_option=1)
# (3) Select samples manually from each cluster:
(idx_train, idx_test) = selection.manual({0:200, 1:100},
sampling_type='number', test_selection_option=1)
# (4) Select samples at random from each cluster:
(idx_train, idx_test) = selection.random(20, test_selection_option=1)
# Partition the original observations into train and test data:
X_train = X[idx_train,:]
X_test = X[idx_test,:]
# -
# #### Principal Component Analysis (PCA)
# +
from PCAfold import PCA
import numpy as np
# Generate a dummy data set:
X = np.random.rand(100,10)
# Instantiate PCA class object:
pca_X = PCA(X, scaling='auto', n_components=2)
# Eigenvectors:
eigenvectors = pca_X.A
# Eigenvalues:
eigenvalues = pca_X.L
# Principal Components:
principal_components = pca_X.transform(X)
# Reconstruct the data set from the first two Principal Components:
X_rec = pca_X.reconstruct(principal_components)
# -
# <a id=example></a>
# ***
#
# ## Illustrative example
#
# [**↑ Go to the top**](#top-page)
#
# Code presented in section *3. Illustrative example*.
# +
import numpy as np
# Original variables:
X = np.genfromtxt('data-state-space.csv', delimiter=',')
# List of names of the original variables:
variables_names = ['$T$', '$H_2$', '$O_2$', '$O$', '$OH$', '$H_2O$', '$H$', '$HO_2$', '$CO$', '$CO_2$', '$HCO$']
# Corresponding source terms of the original variables:
S_X = np.genfromtxt('data-state-space-sources.csv', delimiter=',')
# +
from PCAfold import preprocess
from PCAfold import reduction
from PCAfold import analysis
# Perform PCA on the data set:
pca_X = reduction.PCA(X, scaling='auto', n_components=2)
# Transform original variables to the PC space:
Z = pca_X.transform(X, nocenter=False)
# Transform sources of the original variables to the PC space:
S_Z = pca_X.transform(S_X, nocenter=True)
# Cluster the data set:
(idx, _) = preprocess.zero_neighborhood_bins(S_Z[:,0], k=4, zero_offset_percentage=2, split_at_zero=True, verbose=True)
# Compute populations of each cluster:
populations = preprocess.get_populations(idx)
# +
# Instantiate DataSampler class object:
sample = preprocess.DataSampler(idx, idx_test=None, random_seed=100, verbose=True)
# Select 2400 samples from each cluster:
(idx_manual, _) = sample.manual({0:2400, 1:2400, 2:2400, 3:2400}, sampling_type='number', test_selection_option=1)
# -
# Perform PCA on a sampled data set according to biasing option 2:
(eigenvalues, eigenvectors, Z_r, S_Z_r, C, D, C_r, D_r) = reduction.pca_on_sampled_data_set(X,
idx_manual,
scaling='auto',
n_components=2,
biasing_option=2,
X_source=S_X)
# +
# Plot the weights change on the first eigenvector:
plt = reduction.analyze_eigenvector_weights_change(np.vstack((pca_X.A[:,0], eigenvectors[:,0])).T,
variables_names,
legend_label=['$\mathbf{X}$', '$\mathbf{X_r}$'],
save_filename=None)
# Plot the initial two-dimensional manifold:
plt = reduction.plot_2d_manifold(Z[:,0],
Z[:,1],
color=X[:,6],
x_label='$Z_{1}$',
y_label='$Z_{2}$',
colorbar_label='$Y_H$ [-]',
save_filename=None)
# Plot the biased two-dimensional manifold:
plt = reduction.plot_2d_manifold(Z_r[:,0],
Z_r[:,1],
color=X[:,6],
x_label='$Z_{r, 1}$',
y_label='$Z_{r, 2}$',
colorbar_label='$Y_H$ [-]',
save_filename=None)
# +
# Bandwidth values:
bandwidth_values = np.logspace(-3.5, 0.5, 25)
# Create dependent variables matrices:
depvars_initial = np.hstack((S_Z, X[:,[6]]))
depvars_biased = np.hstack((S_Z_r, X[:,[6]]))
# Create names for dependent variables:
depvar_names_initial = ['$S_{Z_1}$', '$S_{Z_2}$', '$Y_H$']
depvar_names_biased = ['$S_{Z_{r, 1}}}$', '$S_{Z_{r, 2}}$', '$Y_H$']
# -
# Compute normalized variance quantities on the initial manifold:
variance_data_initial = analysis.compute_normalized_variance(Z,
depvars_initial,
depvar_names=depvar_names_initial,
bandwidth_values=bandwidth_values)
# Compute normalized variance quantities on the biased manifold:
variance_data_biased = analysis.compute_normalized_variance(Z_r,
depvars_biased,
depvar_names=depvar_names_biased,
bandwidth_values=bandwidth_values)
# Plot the comparison of normalized variance quantities:
plt = analysis.plot_normalized_variance_comparison((variance_data_initial, variance_data_biased),
([], []),
('Greys', 'Reds'),
save_filename=None)
# <a id=references></a>
# ***
#
# ## References
#
# [**↑ Go to the top**](#top-page)
#
# The data set used in this notebook has been generated from a steady laminar flamelet model using Spitfire software [1] and a chemical mechanism by Hawkes et al. [2]. The data set can be found in `docs/tutorials` directory.
#
# > [1] [<NAME> - *Spitfire*, 2020](https://github.com/sandialabs/Spitfire)
# >
# > [2] <NAME>, <NAME>, <NAME>, <NAME> - *Scalar mixing in direct numerical simulations of temporally evolving plane jet flames with skeletal co/h2 kinetics*, Proceedings of the combustion institute 31 (1) (2007) 1633–1640
# ***
|
docs/tutorials/SoftwareX-PCAfold-example.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression, LogisticRegression, Ridge
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score, roc_auc_score, precision_recall_curve, auc
rates = 2**np.arange(7)/80
print(rates)
# +
def get_inputs(sm):
seq_len = 220
sm = sm.split()
if len(sm)>218:
print('SMILES is too long ({:d})'.format(len(sm)))
sm = sm[:109]+sm[-109:]
ids = [vocab.stoi.get(token, unk_index) for token in sm]
ids = [sos_index] + ids + [eos_index]
seg = [1]*len(ids)
padding = [pad_index]*(seq_len - len(ids))
ids.extend(padding), seg.extend(padding)
return ids, seg
def get_array(smiles):
x_id, x_seg = [], []
for sm in smiles:
a,b = get_inputs(sm)
x_id.append(a)
x_seg.append(b)
return torch.tensor(x_id), torch.tensor(x_seg)
# -
# ### ECFP4
# +
from rdkit import Chem
from rdkit.Chem import AllChem
def bit2np(bitvector):
bitstring = bitvector.ToBitString()
intmap = map(int, bitstring)
return np.array(list(intmap))
def extract_morgan(smiles, targets):
x,X,y = [],[],[]
for sm,target in zip(smiles,targets):
mol = Chem.MolFromSmiles(sm)
if mol is None:
print(sm)
continue
fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, 1024) # Morgan (Similar to ECFP4)
x.append(sm)
X.append(bit2np(fp))
y.append(target)
return x,np.array(X),np.array(y)
# -
# ### ST, RNN, BERT
# +
import torch
from pretrain_trfm import TrfmSeq2seq
from pretrain_rnn import RNNSeq2Seq
from bert import BERT
from build_vocab import WordVocab
from utils import split
pad_index = 0
unk_index = 1
eos_index = 2
sos_index = 3
mask_index = 4
vocab = WordVocab.load_vocab('data/vocab.pkl')
trfm = TrfmSeq2seq(len(vocab), 256, len(vocab), 3)
trfm.load_state_dict(torch.load('.save/trfm_12_23000.pkl'))
trfm.eval()
print('Total parameters:', sum(p.numel() for p in trfm.parameters()))
rnn = RNNSeq2Seq(len(vocab), 256, len(vocab), 3)
rnn.load_state_dict(torch.load('.save/seq2seq_1.pkl'))
rnn.eval()
print('Total parameters:', sum(p.numel() for p in rnn.parameters()))
bert = BERT(len(vocab), hidden=256, n_layers=8, attn_heads=8, dropout=0)
bert.load_state_dict(torch.load('../result/chembl/ep00_it010000.pkl'))
bert.eval()
print('Total parameters:', sum(p.numel() for p in bert.parameters()))
# -
# # Evaluation
# +
def evaluate_regression(X, y, rate, n_repeats, model='ridge'):
r2, rmse = np.empty(n_repeats), np.empty(n_repeats)
for i in range(n_repeats):
if model=='ridge':
reg = Ridge()
elif model=='rf':
reg = RandomForestRegressor(n_estimators=10)
else:
raise ValueError('Model "{}" is invalid. Specify "ridge" or "rf".'.format(model))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1-rate)
reg.fit(X_train, y_train)
y_pred = reg.predict(X_test)
        r2[i] = r2_score(y_test, y_pred)
        rmse[i] = mean_squared_error(y_test, y_pred)**0.5
ret = {}
ret['r2 mean'] = np.mean(r2)
ret['r2 std'] = np.std(r2)
ret['rmse mean'] = np.mean(rmse)
ret['rmse std'] = np.std(rmse)
return ret
def evaluate_classification(X, y, rate, n_repeats, model='ridge'):
roc_aucs, prc_aucs = np.empty(n_repeats), np.empty(n_repeats)
for i in range(n_repeats):
if model=='ridge':
clf = LogisticRegression(penalty='l2', solver='lbfgs', max_iter=1000)
elif model=='rf':
clf = RandomForestClassifier(n_estimators=10)
else:
raise ValueError('Model "{}" is invalid. Specify "ridge" or "rf".'.format(model))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1-rate, stratify=y)
clf.fit(X_train, y_train)
y_score = clf.predict_proba(X_test)
roc_aucs[i] = roc_auc_score(y_test, y_score[:,1])
precision, recall, thresholds = precision_recall_curve(y_test, y_score[:,1])
prc_aucs[i] = auc(recall, precision)
ret = {}
ret['roc_auc mean'] = np.mean(roc_aucs)
ret['roc_auc std'] = np.std(roc_aucs)
ret['prc_auc mean'] = np.mean(prc_aucs)
ret['prc_auc std'] = np.std(prc_aucs)
return ret
def evaluate_classification_multi(X, rate, n_repeats, model='ridge'):
roc_aucs, prc_aucs = np.empty(n_repeats), np.empty(n_repeats)
for i in range(n_repeats):
_roc_aucs, _prc_aucs = np.empty(len(KEYS)), np.empty(len(KEYS))
for j,key in enumerate(KEYS):
X_dr = X[df[key].notna()]
y_dr = df[key].dropna().values
if model=='ridge':
clf = LogisticRegression(penalty='l2', solver='lbfgs', max_iter=1000)
elif model=='rf':
clf = RandomForestClassifier(n_estimators=10)
else:
raise ValueError('Model "{}" is invalid. Specify "ridge" or "rf".'.format(model))
X_train, X_test, y_train, y_test = train_test_split(X_dr, y_dr, test_size=1-rate, stratify=y_dr)
clf.fit(X_train, y_train)
y_score = clf.predict_proba(X_test)
_roc_aucs[j] = roc_auc_score(y_test, y_score[:,1])
precision, recall, thresholds = precision_recall_curve(y_test, y_score[:,1])
_prc_aucs[j] = auc(recall, precision)
roc_aucs[i] = np.mean(_roc_aucs)
prc_aucs[i] = np.mean(_prc_aucs)
ret = {}
ret['roc_auc mean'] = np.mean(roc_aucs)
ret['roc_auc std'] = np.std(roc_aucs)
ret['prc_auc mean'] = np.mean(prc_aucs)
ret['prc_auc std'] = np.std(prc_aucs)
return ret
# -
# ## ESOL
df = pd.read_csv('data/esol.csv')
print(df.shape)
df.head()
# ### ST
x_split = [split(sm) for sm in df['smiles'].values]
xid, xseg = get_array(x_split)
X = trfm.encode(torch.t(xid))
print(X.shape)
# Ridge
scores = []
for rate in rates:
score_dic = evaluate_regression(X, df['measured log solubility in mols per litre'].values, rate, 3, model='ridge')
print(rate, score_dic)
scores.append(score_dic['rmse mean'])
print(np.mean(scores))
# RF
scores = []
for rate in rates:
score_dic = evaluate_regression(X, df['measured log solubility in mols per litre'].values, rate, 20, model='rf')
print(rate, score_dic)
scores.append(score_dic['rmse mean'])
print(np.mean(scores))
# ### ECFP
x,X,y = extract_morgan(df['smiles'].values,df['measured log solubility in mols per litre'].values)
print(len(X), len(y))
# Ridge
scores = []
for rate in rates:
score_dic = evaluate_regression(X, df['measured log solubility in mols per litre'].values, rate, 20, model='ridge')
print(rate, score_dic)
scores.append(score_dic['rmse mean'])
print(np.mean(scores))
# RF
scores = []
for rate in rates:
score_dic = evaluate_regression(X, df['measured log solubility in mols per litre'].values, rate, 20, model='rf')
print(rate, score_dic)
scores.append(score_dic['rmse mean'])
print(np.mean(scores))
# ### RNN
x_split = [split(sm) for sm in df['smiles'].values]
xid, _ = get_array(x_split)
X = rnn.encode(torch.t(xid))
print(X.shape)
# Ridge
scores = []
for rate in rates:
score_dic = evaluate_regression(X, df['measured log solubility in mols per litre'].values, rate, 20, model='ridge')
print(rate, score_dic)
scores.append(score_dic['rmse mean'])
print(np.mean(scores))
# RF
scores = []
for rate in rates:
score_dic = evaluate_regression(X, df['measured log solubility in mols per litre'].values, rate, 20, model='rf')
print(rate, score_dic)
scores.append(score_dic['rmse mean'])
print(np.mean(scores))
# ## FreeSolv
df = pd.read_csv('data/freesolv.csv')
print(df.shape)
df.head()
# ### ST
x_split = [split(sm) for sm in df['smiles'].values]
xid, xseg = get_array(x_split)
X = trfm.encode(torch.t(xid))
print(X.shape)
scores = []
for rate in rates:
score_dic = evaluate_regression(X, df['expt'].values, rate, 20, model='ridge')
print(rate, score_dic)
scores.append(score_dic['rmse mean'])
print(np.mean(scores))
scores = []
for rate in rates:
score_dic = evaluate_regression(X, df['expt'].values, rate, 20, model='rf')
print(rate, score_dic)
scores.append(score_dic['rmse mean'])
print(np.mean(scores))
# ### ECFP
x,X,y = extract_morgan(df['smiles'].values, df['expt'].values)
print(len(X), len(y))
scores = []
for rate in rates:
score_dic = evaluate_regression(X, df['expt'].values, rate, 20, model='ridge')
print(rate, score_dic)
scores.append(score_dic['rmse mean'])
print(np.mean(scores))
scores = []
for rate in rates:
score_dic = evaluate_regression(X, df['expt'].values, rate, 20, model='rf')
print(rate, score_dic)
scores.append(score_dic['rmse mean'])
print(np.mean(scores))
# ### RNN
x_split = [split(sm) for sm in df['smiles'].values]
xid, _ = get_array(x_split)
X = rnn.encode(torch.t(xid))
print(X.shape)
scores = []
for rate in rates:
score_dic = evaluate_regression(X, df['expt'].values, rate, 20, model='ridge')
print(rate, score_dic)
scores.append(score_dic['rmse mean'])
print(np.mean(scores))
scores = []
for rate in rates:
score_dic = evaluate_regression(X, df['expt'].values, rate, 20, model='rf')
print(rate, score_dic)
scores.append(score_dic['rmse mean'])
print(np.mean(scores))
# ## Lipo
df = pd.read_csv('data/lipo.csv')
print(df.shape)
df.head()
# ### ST
x_split = [split(sm) for sm in df['smiles'].values]
xid, xseg = get_array(x_split)
X = trfm.encode(torch.t(xid))
print(X.shape)
scores = []
for rate in rates:
score_dic = evaluate_regression(X, df['exp'].values, rate, 20, model='ridge')
print(rate, score_dic)
scores.append(score_dic['rmse mean'])
print(np.mean(scores))
scores = []
for rate in rates:
score_dic = evaluate_regression(X, df['exp'].values, rate, 20, model='rf')
print(rate, score_dic)
scores.append(score_dic['rmse mean'])
print(np.mean(scores))
# ### ECFP
x,X,y = extract_morgan(df['smiles'].values, df['exp'].values)
print(len(X), len(y))
scores = []
for rate in rates:
score_dic = evaluate_regression(X, df['exp'].values, rate, 20, model='ridge')
print(rate, score_dic)
scores.append(score_dic['rmse mean'])
print(np.mean(scores))
scores = []
for rate in rates:
score_dic = evaluate_regression(X, df['exp'].values, rate, 20, model='rf')
print(rate, score_dic)
scores.append(score_dic['rmse mean'])
print(np.mean(scores))
# ### RNN
x_split = [split(sm) for sm in df['smiles'].values]
xid, _ = get_array(x_split)
X = rnn.encode(torch.t(xid))
print(X.shape)
scores = []
for rate in rates:
score_dic = evaluate_regression(X, df['exp'].values, rate, 20, model='ridge')
print(rate, score_dic)
scores.append(score_dic['rmse mean'])
print(np.mean(scores))
scores = []
for rate in rates:
score_dic = evaluate_regression(X, df['exp'].values, rate, 20, model='rf')
print(rate, score_dic)
scores.append(score_dic['rmse mean'])
print(np.mean(scores))
# ## HIV
df = pd.read_csv('data/hiv.csv')
print(df.shape)
df.head()
# ### ST
x_split = [split(sm) for sm in df['smiles'].values]
xid, xseg = get_array(x_split)
X = trfm.encode(torch.t(xid))
print(X.shape)
scores = []
for rate in rates:
score_dic = evaluate_classification(X, df['HIV_active'].values, rate, 20, model='ridge')
print(rate, score_dic)
scores.append(score_dic['roc_auc mean'])
print(np.mean(scores))
scores = []
for rate in rates:
score_dic = evaluate_classification(X, df['HIV_active'].values, rate, 20, model='rf')
print(rate, score_dic)
scores.append(score_dic['roc_auc mean'])
print(np.mean(scores))
# ### ECFP
x,X,_ = extract_morgan(df['smiles'].values,df['smiles'].values)  # second arg (targets) is unused here; smiles passed as a placeholder
print(len(X))
scores = []
for rate in rates:
score_dic = evaluate_classification(X, df['HIV_active'].values, rate, 20, model='ridge')
print(rate, score_dic)
scores.append(score_dic['roc_auc mean'])
print(np.mean(scores))
scores = []
for rate in rates:
score_dic = evaluate_classification(X, df['HIV_active'].values, rate, 20, model='rf')
print(rate, score_dic)
scores.append(score_dic['roc_auc mean'])
print(np.mean(scores))
# ### RNN
x_split = [split(sm) for sm in df['smiles'].values]
xid, _ = get_array(x_split)
X = rnn.encode(torch.t(xid))
print(X.shape)
scores = []
for rate in rates:
score_dic = evaluate_classification(X, df['HIV_active'].values, rate, 20, model='ridge')
print(rate, score_dic)
scores.append(score_dic['roc_auc mean'])
print(np.mean(scores))
scores = []
for rate in rates:
score_dic = evaluate_classification(X, df['HIV_active'].values, rate, 20, model='rf')
print(rate, score_dic)
scores.append(score_dic['roc_auc mean'])
print(np.mean(scores))
# ## BACE
df = pd.read_csv('data/bace.csv')
print(df.shape)
df.head()
# ### ST
x_split = [split(sm) for sm in df['mol'].values]
xid, _ = get_array(x_split)
X = trfm.encode(torch.t(xid))
print(X.shape)
scores = []
for rate in rates:
score_dic = evaluate_classification(X, df['Class'].values, rate, 20)
print(rate, score_dic)
scores.append(score_dic['roc_auc mean'])
print(np.mean(scores))
scores = []
for rate in rates:
score_dic = evaluate_classification(X, df['Class'].values, rate, 20, model='rf')
print(rate, score_dic)
scores.append(score_dic['roc_auc mean'])
print(np.mean(scores))
# ### ECFP
x,X,y = extract_morgan(df['mol'].values,df['Class'].values)
print(len(X), len(y))
scores = []
for rate in rates:
score_dic = evaluate_classification(X, df['Class'].values, rate, 20, model='ridge')
print(rate, score_dic)
scores.append(score_dic['roc_auc mean'])
print(np.mean(scores))
scores = []
for rate in rates:
score_dic = evaluate_classification(X, df['Class'].values, rate, 20, model='rf')
print(rate, score_dic)
scores.append(score_dic['roc_auc mean'])
print(np.mean(scores))
# ### RNN
x_split = [split(sm) for sm in df['mol'].values]
xid, _ = get_array(x_split)
X = rnn.encode(torch.t(xid))
print(X.shape)
scores = []
for rate in rates:
score_dic = evaluate_classification(X, df['Class'].values, rate, 20, model='ridge')
print(rate, score_dic)
scores.append(score_dic['roc_auc mean'])
print(np.mean(scores))
scores = []
for rate in rates:
score_dic = evaluate_classification(X, df['Class'].values, rate, 20, model='rf')
print(rate, score_dic)
scores.append(score_dic['roc_auc mean'])
print(np.mean(scores))
# ## BBBP
df = pd.read_csv('data/bbbp.csv')
print(df.shape)
df.head()
# ### ST
x_split = [split(sm) for sm in df['smiles'].values]
xid, _ = get_array(x_split)
X = trfm.encode(torch.t(xid))
print(X.shape)
scores = []
for rate in rates:
score_dic = evaluate_classification(X, df['p_np'].values, rate, 20, model='ridge')
print(rate, score_dic)
scores.append(score_dic['roc_auc mean'])
print(np.mean(scores))
scores = []
for rate in rates:
score_dic = evaluate_classification(X, df['p_np'].values, rate, 20, model='rf')
print(rate, score_dic)
scores.append(score_dic['roc_auc mean'])
print(np.mean(scores))
# ### ECFP
x,X,y = extract_morgan(df['smiles'].values,df['p_np'].values)
print(len(X), len(y))
scores = []
for rate in rates:
score_dic = evaluate_classification(X, y, rate, 20, model='ridge')
print(rate, score_dic)
scores.append(score_dic['roc_auc mean'])
print(np.mean(scores))
scores = []
for rate in rates:
score_dic = evaluate_classification(X, y, rate, 20, model='rf')
print(rate, score_dic)
scores.append(score_dic['roc_auc mean'])
print(np.mean(scores))
# ### RNN
x_split = [split(sm) for sm in df['smiles'].values]
xid, _ = get_array(x_split)
X = rnn.encode(torch.t(xid))
print(X.shape)
scores = []
for rate in rates:
score_dic = evaluate_classification(X, df['p_np'].values, rate, 20, model='ridge')
print(rate, score_dic)
scores.append(score_dic['roc_auc mean'])
print(np.mean(scores))
scores = []
for rate in rates:
score_dic = evaluate_classification(X, df['p_np'].values, rate, 20, model='rf')
print(rate, score_dic)
scores.append(score_dic['roc_auc mean'])
print(np.mean(scores))
# ## Tox21
df = pd.read_csv('data/tox21.csv')
KEYS = df.keys()[:-2]
print(df.shape)
df.head()
# ### ST
x_split = [split(sm) for sm in df['smiles'].values]
xid, _ = get_array(x_split)
X = trfm.encode(torch.t(xid))
print(X.shape)
scores = []
for rate in rates:
score_dic = evaluate_classification_multi(X, rate, 20, model='ridge')
print(rate, score_dic)
scores.append(score_dic['roc_auc mean'])
print(np.mean(scores))
scores = []
for rate in rates:
score_dic = evaluate_classification_multi(X, rate, 20, model='rf')
print(rate, score_dic)
scores.append(score_dic['roc_auc mean'])
print(np.mean(scores))
# ### ECFP
x,X,_ = extract_morgan(df['smiles'].values,df['smiles'].values)  # second arg (targets) is unused here; smiles passed as a placeholder
print(len(X))
scores = []
for rate in rates:
score_dic = evaluate_classification_multi(X, rate, 20, model='ridge')
print(rate, score_dic)
scores.append(score_dic['roc_auc mean'])
print(np.mean(scores))
scores = []
for rate in rates:
score_dic = evaluate_classification_multi(X, rate, 20, model='rf')
print(rate, score_dic)
scores.append(score_dic['roc_auc mean'])
print(np.mean(scores))
# ### RNN
x_split = [split(sm) for sm in df['smiles'].values]
xid, _ = get_array(x_split)
X = rnn.encode(torch.t(xid))
print(X.shape)
scores = []
for rate in rates:
score_dic = evaluate_classification_multi(X, rate, 20, model='ridge')
print(rate, score_dic)
scores.append(score_dic['roc_auc mean'])
print(np.mean(scores))
scores = []
for rate in rates:
score_dic = evaluate_classification_multi(X, rate, 20, model='rf')
print(rate, score_dic)
scores.append(score_dic['roc_auc mean'])
print(np.mean(scores))
# ## ClinTox
df = pd.read_csv('data/clintox.csv')
KEYS = df.keys()[1:]
print(df.shape)
df.head()
# ### ST
x_split = [split(sm) for sm in df['smiles'].values]
xid, _ = get_array(x_split)
X = trfm.encode(torch.t(xid))
print(X.shape)
scores = []
for rate in rates:
score_dic = evaluate_classification_multi(X, rate, 20, model='ridge')
print(rate, score_dic)
scores.append(score_dic['roc_auc mean'])
print(np.mean(scores))
scores = []
for rate in rates:
score_dic = evaluate_classification_multi(X, rate, 20, model='rf')
print(rate, score_dic)
scores.append(score_dic['roc_auc mean'])
print(np.mean(scores))
# ### ECFP
# +
def extract_ecfp_multi(smiles, targets):
x,X,y = [],[],[]
for sm,target in zip(smiles,targets):
mol = Chem.MolFromSmiles(sm)
if mol is None:
print(sm)
continue
fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, 1024) # Morgan (Similar to ECFP4)
x.append(sm)
X.append(bit2np(fp))
y.append(target)
return x,np.array(X),np.array(y)
def evaluate_mlp_classification_multi(X, y, rate, n_repeats, model='ridge'):
aucs = np.empty(n_repeats)  # 'aucs' avoids shadowing sklearn's auc()
for i in range(n_repeats):
_auc = np.empty(len(KEYS))
for j,key in enumerate(KEYS):
X_dr = X
y_dr = y[:,j]
if model=='ridge':
clf = LogisticRegression(penalty='l2', solver='lbfgs', max_iter=1000)
elif model=='rf':
clf = RandomForestClassifier(n_estimators=10)
else:
raise ValueError('Model "{}" is invalid. Specify "ridge" or "rf".'.format(model))
X_train, X_test, y_train, y_test = train_test_split(X_dr, y_dr, test_size=1-rate, stratify=y_dr)
clf.fit(X_train, y_train)
y_score = clf.predict_proba(X_test)
_auc[j] = roc_auc_score(y_test, y_score[:,1])
aucs[i] = np.mean(_auc)
ret = {}
ret['roc_auc mean'] = np.mean(aucs)
ret['roc_auc std'] = np.std(aucs)
return ret
# -
x,X,y = extract_ecfp_multi(df['smiles'].values, np.array([df['FDA_APPROVED'].values, df['CT_TOX'].values]).T)
print(len(X))
scores = []
for rate in rates:
score_dic = evaluate_mlp_classification_multi(X,y, rate, 20, model='ridge')
print(rate, score_dic)
scores.append(score_dic['roc_auc mean'])
print(np.mean(scores))
scores = []
for rate in rates:
score_dic = evaluate_mlp_classification_multi(X,y, rate, 20, model='rf')
print(rate, score_dic)
scores.append(score_dic['roc_auc mean'])
print(np.mean(scores))
# ### RNN
x_split = [split(sm) for sm in df['smiles'].values]
xid, _ = get_array(x_split)
X = rnn.encode(torch.t(xid))
print(X.shape)
scores = []
for rate in rates:
score_dic = evaluate_classification_multi(X, rate, 20, model='ridge')
print(rate, score_dic)
scores.append(score_dic['roc_auc mean'])
print(np.mean(scores))
scores = []
for rate in rates:
score_dic = evaluate_classification_multi(X, rate, 20, model='rf')
print(rate, score_dic)
scores.append(score_dic['roc_auc mean'])
print(np.mean(scores))
experiments/ablation_2.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # <img style="float: left; padding-right: 10px; width: 45px" src="https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/iacs.png"> CS-109A Introduction to Data Science
#
# ## Lab 9: Decision Trees (Part 1 of 2): Classification, Regression, Bagging, Random Forests
#
# **Harvard University**<br/>
# **Fall 2019**<br/>
# **Instructors:** <NAME>, <NAME>, and <NAME><br/>
# **Lab Instructors:** <NAME> and <NAME><br/>
# **Authors:** <NAME>, <NAME>, <NAME>
## RUN THIS CELL TO PROPERLY HIGHLIGHT THE EXERCISES
import requests
from IPython.core.display import HTML
styles = requests.get("https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/cs109.css").text
HTML(styles)
# ## Learning Goals
#
# The goal of this lab is for students to:
#
# <ul>
# <li>Understand where Decision Trees fit into the larger picture of this class and other models</li>
# <li>Understand what Decision Trees are and why we would care to use them</li>
# <li>How decision trees work</li>
# <li>Feel comfortable running sklearn's implementation of a decision tree</li>
# <li>Understand the concepts of bagging and random forests</li>
# </ul>
# imports
# %matplotlib inline
import numpy as np
import scipy as sp
from sklearn.model_selection import train_test_split
from sklearn import tree
from sklearn.model_selection import cross_val_score
from sklearn.utils import resample
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
import matplotlib as mpl
import matplotlib.cm as cm
import matplotlib.pyplot as plt
import pandas as pd
pd.set_option('display.width', 500)
pd.set_option('display.max_columns', 100)
pd.set_option('display.notebook_repr_html', True)
import seaborn as sns  # seaborn.apionly was deprecated and removed; plain seaborn is the modern import
# ## Background
#
# Let's do a high-level recap of what we've learned in this course so far:
#
# Say we have input data $X = (X_1, X_2, ..., X_n)$ and corresponding class labels $Y = (Y_1, Y_2, ..., Y_n)$ where $n$ represents the number of observations/instances (i.e., unique samples). Much of statistical learning concerns trying to model this relationship between our data's $X$ and $Y$. In particular, we assert that the $Y$'s were produced/generated by some underlying function $f(X)$, and that there is inevitably some noise and systematic, implicit bias and error $\epsilon$ that cannot be captured by any $f(X)$. Thus, we have:
#
# $Y = f(X) + \epsilon$
#
# Statistical learning concerns either **prediction** or **inference**:
#
# **Prediction:** concerns trying to learn a function $\hat{f}(X)$ that is as close as possible to the true function $f(X)$. This allows us to estimate $Y$ values for any new input data $X$.
#
# **Inference:** concerns trying to understand/model the _relationship_ between $X$ and $Y$, effectively learning how the data was generated.
#
# Independent of this, if you have access to gold truth labels $Y$, and you make use of them for your modelling, then you are working on a **supervised** learning task. If you do not have or make use of $Y$ values, and you are only concerned with the input data $X$, you are working on an **unsupervised** learning task.
#
# <br>
# <div class="exercise"><b>Q1:</b> Using the above terms, what types of problems are linear regression, logistic regression, and PCA?</div>
# # %load solutions/q1.txt
# solution discussed in lab.
Linear Regression is a supervised, prediction task (in particular, a regression task -- trying to predict a numeric value).
Logistic Regression is a supervised, prediction task (in particular, a classification task -- trying to predict the probability of a category).
PCA is unsupervised and isn't a prediction or inference task. It's independent, as it's merely transforming the data by reducing its dimensions. Afterwards, the data could be used for any model.
# <br>
# <div class="exercise"><b>Q2:</b> What is a decision tree? Why do we care to make a decision tree?</div>
# +
# discussed in lab.
# -
# ## Understanding Decision Trees
#
# My goal is for none of the topics we learn in this class to seem like nebulous concepts or black-boxes of magic. In this course, it's important to understand the models that you can use to help you with your data, and this includes not only knowing how to invoke these as tools within Python libraries (e.g., ``sklearn``, ``statsmodels``), but also understanding what each model is actually doing 'under the hood' -- how it actually works -- as this provides insight into why you should use one model vs another, and how you could adjust models and invent new ones!
#
#
# ### Entropy (aka Uncertainty)
#
# Remember, in the last lab, we mentioned that in data science and machine learning, our models are often just finding patterns in the data. For example, for classification, it is best when our data is separable by their $Y$ class labels (e.g., cancerous or benign). That is, hopefully the $X$ values for one class label (e.g., cancerous) are disjoint and separated from the $X$ values that correspond to another class label (e.g., benign). If so, our model would be able to easily discern if a given, new piece of data corresponds to the cancerous label or benign label, based on its $X$ values. If the data is not easily separable (i.e., the $X$ values corresponding to cancer look very similar to $X$ values corresponding to benign), then our task is difficult and perhaps impossible. Along these lines, we can measure this element in terms of how messy/confusable/_uncertain_ a collection of data is.
#
# In the 1870s, physicists introduced a term ``Gibbs Entropy``, which was useful in statistical thermodynamics, as it effectively measured uncertainty. By the late 1920s, the foundational work in Information Theory had begun; pioneers <NAME> and <NAME> conducted phenomenal work which paved the way for computation at large -- they heavily influenced the creation of computer science, and their work is still seen in modern day computers. Information theory concerns [entropy.](https://en.wikipedia.org/wiki/Entropy_(information_theory)) So let's look at an example to concretely address what entropy is (the information theoretic version of it).
#
# Say that we have a fair coin $X$, and each coin flip is an observation. The coin is equally likely to yield heads or tails. The uncertainty is very high. In fact, it's the highest possible, as it's truly a 50/50 chance of either. Let $H(X)$ represent the entropy of $X$. Per the graphic below, we see that entropy is in fact highest when the probabilities of a 2-class variable are a 50/50 chance.
#
# <div>
# <img src="coin_flip.png" width="300"/>
# </div>
#
# If we had a cheating coin, whereby it was guaranteed to always be a head (or a tail), then our entropy would be 0, as there is no **uncertainty** about its outcome. Again, this term, entropy, predates decision trees and has vast applications. Alright, so we can see what entropy is measuring (the uncertainty), but how was it actually calculated?
#
# #### Definition:
# Entropy factors in _all_ possible values/classes of a random variable (log base 2):
# <div>
# <img src="entropy_definition.svg" width="250"/>
# </div>
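# In case the SVG above does not render, the depicted definition (log base 2, summing over the possible values $x_i$ of $X$) is:

```latex
H(X) = -\sum_{i} P(x_i) \log_2 P(x_i)
```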
#
# #### Fair-Coin Example
# In our fair coin example, we only have 2 classes, both of which have a probability of 1/2. So, to calculate the overall entropy of the fair coin, we have Entropy(1+, 1-) =
# <p>
# <center>
# $H(X)$ = -1 * (P(coin=heads)*log(P(coin=heads)) + P(coin=tails)*log(P(coin=tails)))
# </center>
#
# <p>
# <center>
# $ = -1 * (\frac{1}{2}log(\frac{1}{2}) + \frac{1}{2}log(\frac{1}{2}))$
# </center>
#
# <p>
# <center>
# $ = -1 * (\frac{1}{2}*(-1) + \frac{1}{2}*(-1))$
# </center>
#
# <p>
# <center>
# $ = -1 * (-\frac{1}{2} + -\frac{1}{2})$
# </center>
#
# <p>
# <center>
# $ = -1*-1 = 1$
# </center>
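# The fair-coin arithmetic above can be checked in a few lines of Python. This is a standard-library sketch; the function name `entropy` is ours, not part of the lab's code:

```python
import math

def entropy(probs):
    """Shannon entropy (base 2) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))    # fair coin -> 1.0
print(entropy([0.75, 0.25]))  # -> ~0.811, the value that reappears in the Wind split below
```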
#
#
# ### Worked Example
#
# Let's say that we have a small, 14-observation dataset that concerns if we will play tennis on a given day or not (Play Tennis will be our output $Y$), based on 4 features of the current weather:
# <p>
# <div>
# <img src="play_tennis_dataset.png" width="500"/>
# </div>
# <p>
# Completely independent of the features, we can calculate the overall entropy of playing tennis, Entropy for (9+, 5-) examples =
#
# <p>
# <center>
# $H(X)$ = -1 * (P(play_tennis=yes)*log(P(play_tennis=yes)) + P(play_tennis=no)*log(P(play_tennis=no)))
# </center>
#
# <p>
# <center>
# $ = -\frac{9}{14}log(\frac{9}{14}) - \frac{5}{14}log(\frac{5}{14}) = 0.94$
# </center>
#
# Okay, **0.94** is pretty horrible, as it's close to 1, which is the worst possible value. This means that a priori, if we use no features, it's hard to predict if we will play tennis or not. There's a lot of uncertainty (aka entropy). To improve this, can we segment our data in such a way that it becomes clearer whether we will play tennis or not (i.e., by clearer, I mean we will have lower uncertainty... lower entropy)?
#
# Let's start with looking at the ``Wind`` feature. There are 2 possible values for the Wind attribute, **weak** or **strong.** If we were to look at the subset of data that has weak wind, we see that there are 8 data samples (6 are 'Yes' for Play Tennis, 2 have 'No' for Play Tennis). Hmm, so if we know that the Wind is weak, it helps inform us that there's a 6/8 (75%) chance that we will Play Tennis. Let's put this in terms of entropy:
#
# When we look at ONLY the Wind is Weak subset of data, we have a Play Tennis entropy for (6+, 2-) examples, which calculates to:
# <p>
# <center>
# $H(X)$ = -1 * (P(play_tennis=yes)*log(P(play_tennis=yes)) + P(play_tennis=no)*log(P(play_tennis=no)))
# </center>
#
# <p>
# <center>
# $ = -\frac{6}{8}log(\frac{6}{8}) - \frac{2}{8}log(\frac{2}{8}) = 0.811$
# </center>
#
# A value of 0.811 may seem sadly high, still, but our calculation was correct. If you reference the figure above that shows the entropy of a fair coin, we see that having a probability of 75% does in fact yield an entropy of 0.811.
#
# We're only looking at a subset of our data though (the subset for Wind is Weak). We now need to look at the rest of our data (the subset for Wind is Strong). When the Wind is Strong, we have 6 data points: 3 have Play Tennis is Yes, and 3 are No). In short-hand notation, we have (3+, 3-), which is a 0.5 probability, and we know already that this yields an Entropy of 1.
#
# When looking at this possible division of separating our data according to the value of Wind, the hope was that we'd have very low entropy in each subset of data. Imagine if the Wind attribute perfectly aligned with Playing Tennis or not (the values were identical). In that case, we would have an Entropy of 0 (no uncertainty), and thus, it would be INCREDIBLY useful to predict playing tennis or not based on the Wind attribute (it would tell us the exact answer).
#
# We saw that the Wind attribute didn't yield an entropy of 0; its two values (weak and strong) had entropies of 0.811 and 1, respectively. Is Wind a useful feature for us then? In order to quantitatively measure its usefulness, we can use the entropy to calculate ``Information Gain``, which we saw in Lecture 15 on Slide 40:
#
# <p>
# <center>
# $Gain(S) = H(S) - \sum_{i}\frac{|S_{i}|}{|S|}*H(S_{i})$
# </center>
#
# Let $S$ represent our current data, and each $S_{i}$ is a subset of the data split according to each of the possible values. So, when considering splitting on Wind, our Information Gain is:
#
# <p>
# <center>
# $Gain(S) = H(S) - \frac{|S_{wind=weak}|}{|S|}H(S_{wind=weak}) - \frac{|S_{wind=strong}|}{|S|}H(S_{wind=strong})$
# </center>
#
# <p>
# <center>
# $ = 0.94 - \frac{8}{14}0.811 - \frac{6}{14}1.00 = 0.048$
# </center>
#
# Okay, using Wind as a feature to split our data yields an Information Gain of 0.048. That looks like a low value. We want a high value because gain is good (we want to separate our data in a way that the increases our information). Is 0.048 bad? It all depends on the dataset.
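# The gain arithmetic above is easy to verify numerically. A short standard-library sketch (the helper name `H` is ours; the counts come from the worked example):

```python
import math

def H(pos, neg):
    """Entropy (in bits) of a node with `pos` positive and `neg` negative examples."""
    total = pos + neg
    return -sum(c / total * math.log2(c / total) for c in (pos, neg) if c)

# Play Tennis: 9 yes / 5 no overall; Wind=weak -> (6+, 2-), Wind=strong -> (3+, 3-)
gain_wind = H(9, 5) - 8 / 14 * H(6, 2) - 6 / 14 * H(3, 3)
print(round(gain_wind, 3))  # -> 0.048
```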
#
# <p>
# <div class="exercise"><b>Q3:</b> Using our entire 14-observation dataset, calculate the Information Gain for the other 3 remaining features (Outlook, Temperature, Humidity). What are their values and which ones gives us the most information gain?</div>
# # %load solutions/q3.txt
# This was the main lab exercise.
InformationGain(outlook) = 0.246
InformationGain(humidity) = 0.151
InformationGain(temp) = 0.029
# <div class="exercise"><b>Q4:</b> Now that we know which feature provides the most information gain, how should we use it to construct a decision tree? Let's start the construction of our tree and repeat the process of Q3 one more time.</div>
# +
# # %load solutions/q4.txt
Discussed in Lab.
The node that yields the highest Information Gain (outlook) should become our root node, and it should have 3 edges, 1 for each of its possible values (sunny, overcast, rain).
For each of these children, we will focus on just the corresponding subset of data. For example, for the 'sunny' child, we will only look at the subset of data that has outlook being sunny.
When looking at this subset of data, we need to re-calculate the Information Gain for the remaining features (Humidity, Wind, Temp). Whichever has the highest Information Gain will become
'Sunny's' child. This process continues until our stopping criterion is met.
# -
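# The recursive procedure described above (pick the attribute with the highest information gain, split, recurse on each subset) can be sketched in plain Python. This is an illustrative ID3-style sketch, not the lab's solution code, and it assumes the standard 14-row play-tennis table shown in the figure:

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr):
    gain = entropy(labels)
    for value in set(r[attr] for r in rows):
        subset = [y for r, y in zip(rows, labels) if r[attr] == value]
        gain -= len(subset) / len(labels) * entropy(subset)
    return gain

def id3(rows, labels, attrs):
    if len(set(labels)) == 1:                 # pure node: stop
        return labels[0]
    if not attrs:                             # no features left: majority vote
        return Counter(labels).most_common(1)[0][0]
    best = max(attrs, key=lambda a: info_gain(rows, labels, a))
    rest = [a for a in attrs if a != best]
    node = {best: {}}
    for value in set(r[best] for r in rows):
        sub = [(r, y) for r, y in zip(rows, labels) if r[best] == value]
        node[best][value] = id3([r for r, _ in sub], [y for _, y in sub], rest)
    return node

# The 14-row play-tennis table (assumed to match the dataset figure above)
outlook = 'sunny sunny overcast rain rain rain overcast sunny sunny rain sunny overcast overcast rain'.split()
temp = 'hot hot hot mild cool cool cool mild cool mild mild mild hot mild'.split()
humidity = 'high high high high normal normal normal high normal normal normal high normal high'.split()
wind = 'weak strong weak weak weak strong strong weak weak weak strong strong weak strong'.split()
play = 'no no yes yes yes no yes no yes yes yes yes yes no'.split()
rows = [dict(outlook=o, temp=t, humidity=h, wind=w)
        for o, t, h, w in zip(outlook, temp, humidity, wind)]
tree = id3(rows, play, ['outlook', 'temp', 'humidity', 'wind'])
print(tree)  # root split is 'outlook'; the 'overcast' branch is a pure 'yes' leaf
```

Note the greedy structure: each call commits to the locally best attribute and never revisits the choice, which is exactly why real implementations add stopping criteria or pruning.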
# <div class="exercise"><b>Q5:</b> When should we stop this process?</div>
# # %load solutions/q5.txt
Discussed in Lab.
You get to decide. Possible options include:
- all data has the same output (entropy of 0)
- no features remain for the current path of the tree
- depth is N
- Information Gain < alpha
# <div class="exercise"><b>Q6:</b> Should we standardize or normalize our features? Both? Neither?</div>
# # %load solutions/q6.txt
You do not need to standardize or normalize the data because decision trees are not trying to fit the data to a particular line;
they try to draw decision boundaries on a per-feature basis, so it's okay if the values for one feature are drastically different from the values of other features.
That is, the features are not being combined in our traditional approach. Instead, for each node in our tree, a given feature is being evaluated for making a decision independent of the other features.
# <div class="exercise"><b>Q7:</b> What if we have outliers? How sensitive is our Decision Tree to outliers? Why?</div>
# # %load solutions/q7.txt
Related to Q6, it's okay if we have outliers. Decision Trees are robust (not sensitive) to outliers because they merely draw separation lines on a per-feature basis.
It's okay if some values are extremely far from others, as we are effectively trying to figure out how to separate our data, not fit a line that represents all of the data.
# #### Connection to Lecture
# In Lecture 16, Pavlos started by presenting the tricky graph below which depicts a dataset with just 2 features: longitude and latitude.
#
# <div>
# <img src="green_white_data.png" width="300"/>
# </div>
#
# By drawing a straight line to separate our data, we would be doing the same exact process that we are doing here with our Play Tennis dataset. In our Play Tennis example, we are trying to segment our data into bins according to the possible _categories_ that a feature can be. In the lecture example (pictured above), we have continuous data, not discrete categories, so we have an infinite number of thresholds by which to segment our data.
#
# <p>
# <div class="exercise"><b>Q8:</b> How is it possible to segment continuous-valued data, since there are infinite number of possible splits? Do we try 1,000,000 possible values to split by? 100?</div>
# # %load solutions/q8.txt
Different algorithms approach this differently, as there is no gold-standard approach. No approach should try an unwieldy number of possible threshold points though.
At most, you could imagine trying N-1 thresholds, where N is the number of distinct/unique values for a given feature. Each threshold could be the midpoint between any two
successive points after sorting the values. For example, if we only had 4 distinct values (-4, -2, 6, 10), then you could try -3, 2, 8.
This could still lead to trying too many thresholds. It is more common to bin your values in N bins (picture a histogram). Then, you can pick your thresholds based on the N bins.
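# The midpoint idea from the answer above, in code (a sketch; the function name is ours):

```python
def candidate_thresholds(values):
    """Midpoints between successive distinct values, after sorting."""
    v = sorted(set(values))
    return [(a + b) / 2 for a, b in zip(v, v[1:])]

print(candidate_thresholds([-4, -2, 6, 10]))  # -> [-3.0, 2.0, 8.0]
```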
# ### Summary:
#
# To build a decision tree:
# <ul>
# <li>Start with an empty tree and some data $X$</li>
# <li>Decide what your splitting criterion will be (e.g., Gini, Entropy, etc)</li>
# <li>Decide what your stopping criterion will be, or if you'll develop a large tree and prune (pruning is covered in Lecture 15, slides 41 - 54)</li>
# <li>Build the tree in a greedy manner, and if you have multiple hyperparameters, use cross-validation to determine the best values</li>
# </ul>
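# As a concrete reference for the splitting criteria named above, here is a minimal sketch of Gini impurity and entropy for a vector of class labels (illustrative only; sklearn computes these internally):

```python
import numpy as np

def gini(labels):
    """Gini impurity: 1 - sum of squared class proportions."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def entropy(labels):
    """Shannon entropy in bits."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

# a pure node has zero impurity; a 50/50 node is maximally impure
print(gini(['yes', 'yes']), gini(['yes', 'no']), entropy(['yes', 'no']))
```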
#
#
# ## Sklearn's Implementation
#
# Our beloved `sklearn` library has implementations of DecisionTrees, so let's practice using it.
#
# First, let's load our Play Tennis data `(../data/play_tennis.csv")`:
tennis_df = pd.read_csv("../data/play_tennis.csv")
tennis_df = pd.get_dummies(tennis_df, columns=['outlook', 'temp', 'humidity', 'windy'])
tennis_df
# Normally, in real situations, we'd perform EDA. However, for this tiny dataset, we see there are no missing values, and we do not care if there is collinearity or outliers, as Decision Trees are robust to such.
# separate our data into X and Y portions
x_train = tennis_df.iloc[:, tennis_df.columns != 'play'].values
y_train = tennis_df['play'].values
# We can build a DecisionTree classifier as follows:
from sklearn.tree import DecisionTreeClassifier
from sklearn import tree

dt = DecisionTreeClassifier().fit(x_train, y_train)
tree_vis = tree.plot_tree(dt, filled=True)
# <p>
# <div class="exercise"><b>Q9:</b> Is this tree identical to what we constructed above? If not, what differs in sklearn's implementation?</div>
# # %load solutions/q9.txt
It is not identical. Sklearn's DecisionTree class, by default, doesn't handle categorical data, so we need to do one-hot encoding to handle it.
Further, its default splitting criterion is Gini, whereas we used Entropy.
# In the above example, we did not use the tree to do any classification; our dataset was too small for that to be meaningful.
#
# Let's turn to a different dataset:
#
# ## 2016 Election Data
# We will be attempting to predict the presidential election results (at the county level) from 2016, measured as 'votergap' = (trump - clinton) in percentage points, based mostly on demographic features of those counties. Let's take a quick peek at the data:
elect_df = pd.read_csv("../data/county_level_election.csv")
elect_df.head()
# split 80/20 train-test
X = elect_df[['population','hispanic','minority','female','unemployed','income','nodegree','bachelor','inactivity','obesity','density','cancer']]
response = elect_df['votergap']
Xtrain, Xtest, ytrain, ytest = train_test_split(X,response,test_size=0.2)
plt.hist(ytrain)
Xtrain.hist(column=['minority', 'population','hispanic','female']);
print(elect_df.shape)
print(Xtrain.shape)
print(Xtest.shape)
# ## Regression Trees
#
# We will start by using a simple Decision Tree Regressor to predict votergap. We'll run a few of these models without any cross-validation or 'regularization', just to illustrate what is going on.
#
# This is what you ought to keep in mind about decision trees.
#
# from the docs:
# ```
# max_depth : int or None, optional (default=None)
# The maximum depth of the tree. If None, then nodes are expanded until all leaves are pure or until all leaves contain less than min_samples_split samples.
# min_samples_split : int, float, optional (default=2)
# ```
#
# - The deeper the tree, the more prone you are to overfitting.
# - The smaller `min_samples_split`, the more the overfitting. One may use `min_samples_leaf` instead: the more samples required per leaf, the higher the bias.
from sklearn.tree import DecisionTreeRegressor
x = Xtrain['minority'].values
o = np.argsort(x)
x = x[o]
y = ytrain.values[o]
plt.plot(x,y, '.');
plt.plot(np.log(x),y, '.'); # log scale
# <p>
# <div class="exercise"><b>Q10:</b> Which of the two versions of 'minority' would be a better choice to use as a predictor for prediction?</div>
#
# # %load solutions/q10.txt
They would be equally useful. The log-scale is easier to visualize, so we will use it.
plt.plot(np.log(x),y,'.')
xx = np.log(x).reshape(-1,1)
for i in [1,2]:
dtree = DecisionTreeRegressor(max_depth=i)
dtree.fit(xx, y)
plt.plot(np.log(x), dtree.predict(xx), label=str(i), alpha=1-i/10, lw=4)
plt.legend();
plt.plot(np.log(x),y,'.')
xx = np.log(x).reshape(-1,1)
for i in [500,200,100,20]:
dtree = DecisionTreeRegressor(min_samples_split=i)
dtree.fit(xx, y)
plt.plot(np.log(x), dtree.predict(xx), label=str(i), alpha=0.8, lw=4)
plt.legend();
plt.plot(np.log(x),y,'.')
xx = np.log(x).reshape(-1,1)
for i in [500,200,100,20]:
dtree = DecisionTreeRegressor(max_depth=6, min_samples_split=i)
dtree.fit(xx, y)
plt.plot(np.log(x), dtree.predict(xx), label=str(i), alpha=0.8, lw=4)
plt.legend();
#let's also include logminority as a predictor going forward
xtemp = np.log(Xtrain['minority'].values)
Xtrain = Xtrain.assign(logminority = xtemp)
Xtest = Xtest.assign(logminority = np.log(Xtest['minority'].values))
Xtrain.head()
# Ok with this discussion in mind, lets improve this model by Bagging.
# ## Bootstrap-Aggregating (called Bagging)
#
# <p>
# <div class="exercise"><b>Q11:</b> Class poll: When did the movie Titanic come out?</div>
#
# +
# # %load solutions/q11.txt
This was intended to be a class activity to illustrate the idea and effectiveness of bagging.
Basically, there is power in having many people do a particular task. For example, most people cannot recall the exact year that a particular movie was released. Let's use the movie Titanic as an example.
Perhaps you'd guess it was 1990? Some would guess 2000? But, if we polled enough people, by the law of large numbers, we'd probably see a pretty good estimate of the correct answer (1997).
# -
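# The poll's point can be simulated: individually noisy guesses, once averaged, land very close to the truth (a toy law-of-large-numbers demo, not part of the original class activity):

```python
import numpy as np

rng = np.random.default_rng(0)
true_year = 1997
# 10,000 guesses, each off by a few years on average
guesses = true_year + rng.normal(0, 5, size=10_000)
print(guesses.mean())  # the crowd average lands within a fraction of a year of 1997
```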
# The basic idea:
# - A Single Decision tree is likely to overfit.
# - So lets introduce replication through Bootstrap sampling.
# - **Bagging** uses bootstrap resampling to create different training datasets. This way each training will give us a different tree.
# - Added bonus: the left-out points can be used as a natural "validation" set, so there is no need to hold out a separate validation set.
# - Since we have many trees that we will **average over for prediction**, we can choose a large `max_depth` and we are ok as we will rely on the law of large numbers to shrink this large variance, low bias approach for each individual tree.
# +
from sklearn.utils import resample
ntrees = 500
estimators = []
R2s = []
yhats_test = np.zeros((Xtest.shape[0], ntrees))
plt.plot(np.log(x),y,'.')
for i in range(ntrees):
simpletree = DecisionTreeRegressor(max_depth=3)
boot_xx, boot_y = resample(Xtrain[['logminority']], ytrain)
estimators = np.append(estimators,simpletree.fit(boot_xx, boot_y))
R2s = np.append(R2s,simpletree.score(Xtest[['logminority']], ytest))
yhats_test[:,i] = simpletree.predict(Xtest[['logminority']])
plt.plot(np.log(x), simpletree.predict(np.log(x).reshape(-1,1)), 'red', alpha=0.05)
# -
yhats_test.shape
# <div class="exercise">**Exercise 2**</div>
# 1. Edit the code below (which is just copied from above) to refit many bagged trees on the entire xtrain feature set (without the plot...lots of predictors now so difficult to plot).
# 2. Summarize how each of the separate trees performed (both numerically and visually) using $R^2$ as the metric. How do they perform on average?
# 3. Combine the trees into one prediction and evaluate it using $R^2$.
# 4. Briefly discuss the results. How will the results above change if 'max_depth=4' is increased? What if it is decreased?
# +
from sklearn.metrics import r2_score
ntrees = 500
estimators = []
R2s = []
yhats_test = np.zeros((Xtest.shape[0], ntrees))
for i in range(ntrees):
dtree = DecisionTreeRegressor(max_depth=3)
boot_xx, boot_y = resample(Xtrain[['logminority']], ytrain)
estimators = np.append(estimators,dtree.fit(boot_xx, boot_y))
R2s = np.append(R2s,dtree.score(Xtest[['logminority']], ytest))
yhats_test[:,i] = dtree.predict(Xtest[['logminority']])
# your code here
# -
# #### Your answer here
# <hr style='height:2px'>
# ## Random Forests
#
# What's the basic idea?
#
# Bagging alone is not enough randomization, because even after bootstrapping, we are mainly training on the same data points using the same variables, and will retain much of the overfitting.
#
# So we will build each tree by splitting on a "random" subset of predictors at each split (hence, each is a 'random tree'). This can't be done with just one predictor, but with more predictors we can randomly choose which predictors to split on and how many to consider. Then we combine many 'random trees' together by averaging their predictions, and this gets us a forest of random trees: a **random forest**.
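# The per-split feature subsampling can be sketched as follows (a hypothetical helper, not sklearn's internals, which handle this inside the tree builder):

```python
import numpy as np

def features_for_split(n_features, max_features, rng):
    """Indices of the random feature subset considered at one split."""
    m = max(1, int(max_features * n_features))  # e.g. max_features=0.4
    return rng.choice(n_features, size=m, replace=False)

rng = np.random.default_rng(42)
print(features_for_split(12, 0.4, rng))  # 4 of the 12 predictors, drawn at random
```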
# Below we create a hyper-param Grid. We are preparing to use the bootstrap points not used in training for validation.
#
# ```
# max_features : int, float, string or None, optional (default=”auto”)
# - The number of features to consider when looking for the best split.
# ```
#
# - `max_features`: Default splits on all the features and is probably prone to overfitting. You'll want to validate on this.
# - You can "validate" on the number of trees `n_estimators` as well, but often you will just look for the plateau in the score as the number of trees grows, as seen below.
# - From decision trees you get the `max_depth`, `min_samples_split`, and `min_samples_leaf` as well but you might as well leave those at defaults to get a maximally expanded tree.
from sklearn.ensemble import RandomForestRegressor
# +
# code from
# Adventures in scikit-learn's Random Forest by <NAME>
from itertools import product
from collections import OrderedDict
param_dict = OrderedDict(
n_estimators = [400, 600, 800],
max_features = [0.2, 0.4, 0.6, 0.8]
)
param_dict.values()
# -
# ### Using the OOB score.
#
# We have been putting "validate" in quotes. This is because the bootstrap gives us left-over points! So we'll now engage in our very own version of a grid-search, done over the out-of-bag scores that `sklearn` gives us for free.
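# Why do those left-over points exist at all? Each bootstrap draw misses any given row with probability $(1 - 1/n)^n \approx 1/e \approx 0.368$, so roughly a third of the training rows are "out of bag" for each tree. A quick simulation (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
boot = rng.integers(0, n, size=n)        # one bootstrap sample of row indices
oob_frac = 1 - np.unique(boot).size / n  # fraction of rows never drawn
print(oob_frac)                          # close to 1/e ~ 0.368
```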
from itertools import product
# +
#make sure ytrain is the correct data type...in case you have warnings
#print(yytrain.shape,ytrain.shape,Xtrain.shape)
#ytrain = np.ravel(ytrain)
#Let's Cross-val. on the two 'hyperparameters' we based our grid on earlier
results = {}
estimators= {}
for ntrees, maxf in product(*param_dict.values()):
params = (ntrees, maxf)
est = RandomForestRegressor(oob_score=True,
n_estimators=ntrees, max_features=maxf, max_depth=50, n_jobs=-1)
est.fit(Xtrain, ytrain)
results[params] = est.oob_score_
estimators[params] = est
outparams = max(results, key = results.get)
outparams
# -
rf1 = estimators[outparams]
rf1
results
rf1.score(Xtest, ytest)
# Finally you can find the **feature importance** of each predictor in this random forest model. Whenever a feature is used in a tree in the forest, the algorithm will log the decrease in the splitting criterion (such as gini). This is accumulated over all trees and reported in `est.feature_importances_`
pd.Series(rf1.feature_importances_,index=list(Xtrain)).sort_values().plot(kind="barh")
# Since our response isn't very symmetric, we may want to suppress outliers by using the `mean_absolute_error` instead.
from sklearn.metrics import mean_absolute_error
mean_absolute_error(ytest, rf1.predict(Xtest))
|
content/labs/lab09/notes/cs109a_Lab9_Decision_Trees.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
pip install ontospy
import ontospy
model = ontospy.Ontospy("http://vocab.gtfs.org/terms#", verbose=True)
for x in model.all_classes:
print(x.qname)
for x in model.all_properties:
print(x.qname)
print(x.bestLabel())
model.printClassTree()
c1 = _[1]
print(c1.rdf_source())
print(model.rdf_source())
|
src/Notebooks/ontospy.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import ipywidgets as widgets
import pandas as pd
import numpy as np
import psycopg2
import bqplot
import os
os.environ['POSTGRES_USER']
os.environ['POSTGRES_PASS']
host, port = os.environ['POSTGRES_ADDR'].split(':')
connection_config = {
'user': os.environ['POSTGRES_USER'],
'password': os.environ['POSTGRES_PASS'],
'host': host,
'port': port,
'database': 'postgres',
}
connection_config
# +
sql = r'SELECT * FROM game_events;'
with psycopg2.connect(**connection_config) as conn:
df = pd.read_sql_query(sql, conn)
pd.set_option("display.max_colwidth", 120)
df[df.iloc[:, -2] == 1]
# +
table_list = ['game_events', 'games', 'guilds', 'users', 'users_games']
df_dict_ = {}
for tbl in table_list:
sql = f'SELECT * FROM {tbl};'
with psycopg2.connect(**connection_config) as conn:
df_dict_[tbl] = pd.read_sql_query(sql, conn)
df_dict_
# -
games = df_dict_['games']
games[(games.loc[:, 'win_type'] == 0) | (games.loc[:, 'win_type'] == 1) | (games.loc[:, 'win_type'] == 6)].shape
games.shape
47-24
24/(24+23)
#
# [automuteus/storage/statsQueries.go](https://github.com/denverquane/automuteus/blob/7887a1b2b30d20e0a64fec69c48d04cff768066b/storage/statsQueries.go)
#
# - Crewmate win: win_type = 0 or 1 or 6
# - Imposter win: win_type = 2 or 3 or 4 or 5
#
# ```go
# if role == game.CrewmateRole {
# err = pgxscan.Get(context.Background(), psqlInterface.Pool, &r, "SELECT COUNT(*) FROM games WHERE guild_id=$1 AND (win_type=0 OR win_type=1 OR win_type=6)", gid)
# } else {
# err = pgxscan.Get(context.Background(), psqlInterface.Pool, &r, "SELECT COUNT(*) FROM games WHERE guild_id=$1 AND (win_type=2 OR win_type=3 OR win_type=4 OR win_type=5)", gid)
# ```
|
jupyter/work/automuteus_analytics.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import scipy.integrate as integrate
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import scipy.special as sp
from matplotlib import cm, colors
from mpl_toolkits.mplot3d import Axes3D
import scipy.constants as cte
# ## Angular equation
# $ Y_{l}^{k}(\theta,\phi)=\sqrt{\frac{(2l+1)(l-k)!}{4\pi(l+k)!}}\,e^{ik\phi}P_{l}^{k}(\cos\theta)\ \ \text{, where}\ \ A_{l}^{k} = \sqrt{\frac{(2l+1)(l-k)!}{4\pi(l+k)!}}$
#
# ## Radial equation
# $R_{nl}(r)=D_{nl}J_{l}(sr)$
#
# ## Coefficients
# $D_{nl}=\frac{1}{\int_{0}^{\infty} | J_{l}(sr) |^{2}\,r^{2}\,dr}$
#
# ## Value of s
# $s=\frac{\sqrt{2mE}}{\hbar}$ or $s=\frac{\beta_{nl}}{a}$
# ## Energy
#
# $E_{nl}=\frac{\hbar^{2}}{2ma^{2}}\beta_{nl}^{2}$, where $\beta_{nl}$ are the zeros of the Bessel function, i.e. $J_{l}(sa)=0$
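# A quick numeric check of the formulas above (using $\hbar^2/2m = 1$, as the code below does): the first zeros of $J_0$ and the corresponding energies for a well of radius $a$:

```python
import scipy.special as sp

a = 20.0
beta = sp.jn_zeros(0, 3)  # first three zeros of J_0 (~2.405, ~5.520, ~8.654)
E = (beta / a) ** 2       # E_nl = (beta_nl / a)^2 with hbar^2/2m = 1
print(beta)
print(E)
```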
def FUN_BESSEL(a,n,l):
    # n > 0
    # l >= 0
    # PLOT OF THE BESSEL FUNCTION AND ITS 'n' ROOTS
    #----------------------------------------------------------------------------------------------------------------------
    r = np.linspace(0,a,100)
    J_l = sp.jv(l, r) # Bessel function
    B_nl = sp.jn_zeros(l, n) # first n roots of the Bessel function of order l
fig = plt.figure(figsize=(8,6))
plt.plot(r,J_l)
for i in range(len(B_nl)):
plt.scatter(B_nl[i],0)
FUN_BESSEL(20,1,2)
# +
def FuncionRadial(a,n,l):
r = np.linspace(0,a,100)
    J_l = sp.jv(l, r) # Bessel function
    B_nl = sp.jn_zeros(l, n) # first n roots of the Bessel function of order l
    # COMPUTE THE DERIVED QUANTITIES (E, s, D)
    #---------------------------------------------------------------------------------------------------------------
    E_nl = [] # energies for the n roots of the order-l Bessel function
for i in range(len(B_nl)):
E = (B_nl[i]/a)**2 # h^2/2m =1
E_nl.append(E)
    s_nl = [] # values of "s"
for i in range(len(B_nl)):
s = (B_nl[i])/a
s_nl.append(s)
    D_nl = [] # normalization constants
for i in range(len(s_nl)):
i_t = integrate.quad(lambda r: sp.jv(l, s_nl[i]*r)**2*r**2, 0, np.inf)
D = 1/i_t[0]
D_nl.append(D)
    # EVALUATE THE RADIAL FUNCTION FROM THE PARAMETERS ABOVE
    #------------------------------------------------------------------------------------------------------------
    R_nl = [] # list of radial functions
for i in range(len(s_nl)):
R = D_nl[i]*sp.jv(l, s_nl[i]*r)
R_nl.append(R)
return(R_nl,E_nl)
# -
def Grafica_fun_Radial(a,n,l):
colores = ['black','dimgray','dimgrey','gray','grey','darkgray']
colores1 = ['red','darkred','maroon','firebrick','brown','indianred']
colores2 = ['midnightblue', 'navy', 'darkblue', 'mediumblue', 'blue', 'royalblue']
r = np.linspace(0,a,100)
R_nl,E_nl=FuncionRadial(a,n,l)
x=np.linspace(-a,a,100)
for i in range(0,len(R_nl)):
if l% 2 == 0:
plt.plot(r,(R_nl[i]/np.max(R_nl[i])+E_nl[i]), color = colores[i], label='R_' +str(i+1)+','+str(l))
plt.plot(-np.flip(r), (np.flip(R_nl[i]/np.max(R_nl[i]))+E_nl[i]), color = colores[i])
plt.plot(x,[E_nl[i] for j in range(0,len(x))], colores1[i],label='E_'+ str(i+1)+','+str(l))
else:
plt.plot(r,(R_nl[i]/np.max(R_nl[i])+E_nl[i]), colores[i],label='R_'+ str(i+1)+','+str(l))
plt.plot(-np.flip(r),(-np.flip(R_nl[i]/np.max(R_nl[i]))+E_nl[i]), color = colores[i])
plt.plot(x,[E_nl[i] for j in range(0,len(x))], colores1[i],label='E_'+ str(i+1)+','+str(l))
plt.legend()
    plt.xlabel('Well width')
    plt.ylabel('Energy')
plt.show()
Grafica_fun_Radial(20,1,2)
# +
def FO_Elec_cascaron(a,n,l,m):
R_nl,E_nl = FuncionRadial(a,n,l)
    #-------------------- SQUARED MODULUS OF THE WAVE FUNCTION FOR A PARTICLE IN A SPHERICAL POTENTIAL ---------------
    PHI, THETA = np.mgrid[0:2*np.pi:200j, 0:np.pi:100j] # arrays of angular variables
    FUN_ONDA =np.abs((R_nl[n-1]/np.max(R_nl[n-1]))*sp.sph_harm(m, l, PHI, THETA))**2
    # Next we convert to Cartesian coordinates
    # for the 3D representation
    X = FUN_ONDA * np.sin(THETA) * np.cos(PHI)
    Y = FUN_ONDA * np.sin(THETA) * np.sin(PHI)
    Z = FUN_ONDA * np.cos(THETA)
    #----------------------------- PLOT OF THE SQUARED MODULUS OF THE WAVE FUNCTION ----------------------------------
fig = plt.figure(figsize=(10,4))
fig = plt.figure(constrained_layout=True,figsize=(7,6))
spec2 = gridspec.GridSpec(ncols=2, nrows=2, figure=fig)
#===============
    # FIRST subplot
#===============
# set up the axes for the first plot
N = FUN_ONDA/FUN_ONDA.max()
ax = fig.add_subplot(spec2[0,0], projection='3d')
im = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, facecolors=cm.hot_r(N),alpha=0.2)
m = cm.ScalarMappable(cmap=cm.hot_r)
m.set_array(FUN_ONDA)
ax.set_xlabel('X', fontsize = 8)
ax.set_ylabel('Y', fontsize = 8)
ax.set_zlabel('Z', fontsize = 8)
    ax.set_title('Wave function $|\psi_{r, \Theta, \phi}|^{2}$')
#===============
    # SECOND subplot
#===============
# set up the axes for the second plot
ax = fig.add_subplot(spec2[0, 1], projection='3d')
ax.contourf(X,Y,Z, zdir='z', offset=0, cmap=cm.hot)
#ax.contour(X,Y,Z, zdir='z', offset=0, cmap=cm.hot,linewidths=3)
ax.set_xlabel('X', fontsize = 8)
ax.set_ylabel('Y', fontsize = 8)
ax.set_zlabel('Z', fontsize = 8)
m = cm.ScalarMappable(cmap=cm.hot)
    ax.set_title('Probability $|\psi|^{2}$ in xy')
#===============
    # THIRD subplot
#===============
ax = fig.add_subplot(spec2[1, 0], projection='3d')
ax.contourf(X, Y, Z, zdir='y', offset=0, cmap=cm.hot)
#ax.contour(X, Y, Z, zdir='y', offset=0, cmap=cm.hot,linewidths=3)
ax.set_xlabel('X', fontsize = 8)
ax.set_ylabel('Y', fontsize = 8)
ax.set_zlabel('Z', fontsize = 8)
    ax.set_title('Probability $|\psi|^{2}$ in z')
#===============
    # FOURTH subplot
#===============
ax = fig.add_subplot(spec2[1, 1], projection='3d')
ax.contourf(X, Y, Z, zdir='x', offset=0, cmap=cm.hot)
#ax.contour(X, Y, Z, zdir='x', offset=0, cmap=cm.hot,linewidths=3)
ax.set_xlabel('X', fontsize = 8)
ax.set_ylabel('Y', fontsize = 8)
ax.set_zlabel('Z', fontsize = 8)
    ax.set_title('Probability $|\psi|^{2}$ in zx')
fig.colorbar(m, shrink=0.8);
# -
FO_Elec_cascaron(20,2,0,0)
def PROB_FUN_OND_ELEC_CAS(a,n,l,m):
    # l: degree of the spherical harmonic
    # m: order of the harmonic
    # n: state
    R_nl,E_nl = FuncionRadial(a,n,l)
    # PROBABILITY DENSITY OF THE WAVE FUNCTION PSI
    fig = plt.figure(figsize=(5,4))
    PHI, THETA = np.mgrid[0:2*np.pi:100j, 0:np.pi:100j] # arrays of angular variables
    Y = np.abs(sp.sph_harm(m, l, PHI, THETA))**2 # absolute values of Y_ml
YX = Y * np.sin(THETA) * np.cos(PHI)
YY = Y * np.sin(THETA) * np.sin(PHI)
YZ = Y * np.cos(THETA)
r = np.linspace(0,a,100)
for i in range(0,len(r)):
plt.plot(r[i] * YY[1], r[i] * YZ[1], 'k', color = 'r', alpha = (1/(max(R_nl[n-1]))) * abs(R_nl[n-1][i]))
plt.plot(-r[i] * YY[1], r[i] * YZ[1], 'k',color = 'r', alpha = (1/(max(R_nl[n-1]))) * abs(R_nl[n-1][i]))
PROB_FUN_OND_ELEC_CAS(20,2,0,0)
# +
def FO_ATO_HIDROGENO(a,n,l,m):
    # l < n
    #--------------------------------------- RADIAL FUNCTION OF THE HYDROGEN ATOM -----------------------------------------
    a0 = 1
    r = np.linspace(0,a,100)
    rho = (2 * r) / (n * a0)
    N = np.sqrt((np.math.factorial(n-l-1)/(2* n* np.math.factorial(n+l))) * (2/(n *a0)) ** 3)
    R = N * sp.assoc_laguerre(rho,n-l-1,2*l+1) * (rho ** l) * np.exp(- (rho / 2))
    #--------------------------------------- WAVE FUNCTION OF THE HYDROGEN ATOM -------------------------------------
    #THETA = np.linspace(0,np.pi,100)
    #PHI = np.linspace(0,2*np.pi, 100)
    PHI, THETA = np.mgrid[0:2*np.pi:100j, 0:np.pi:100j] # arrays of angular variables
    FUN_ONDA =np.abs((R/np.max(R))*sp.sph_harm(m, l, PHI, THETA))**2
    # Next we convert to Cartesian coordinates
    # for the 3D representation
    X = FUN_ONDA * np.sin(THETA) * np.cos(PHI)
    Y = FUN_ONDA * np.sin(THETA) * np.sin(PHI)
    Z = FUN_ONDA * np.cos(THETA)
    #---------------------------------------- PLOT OF THE WAVE FUNCTION -------------------------------------------------
fig = plt.figure(figsize=(10,4))
fig = plt.figure(constrained_layout=True,figsize=(7,6))
spec2 = gridspec.GridSpec(ncols=2, nrows=2, figure=fig)
#===============
    # FIRST subplot
#===============
# set up the axes for the first plot
N = FUN_ONDA/np.max(FUN_ONDA)
ax = fig.add_subplot(spec2[0,0], projection='3d')
im = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, facecolors=cm.hot(N),alpha=0.2)
im = ax.plot_surface(X, Y, -Z, rstride=1, cstride=1, facecolors=cm.hot(N),alpha=0.2)
m = cm.ScalarMappable(cmap=cm.hot)
m.set_array(FUN_ONDA)
ax.set_xlabel('X', fontsize = 8)
ax.set_ylabel('Y', fontsize = 8)
ax.set_zlabel('Z', fontsize = 8)
    ax.set_title('Wave function $|\psi_{r, \Theta, \phi}|^{2}$')
#===============
    # SECOND subplot
#===============
# set up the axes for the second plot
ax = fig.add_subplot(spec2[0, 1], projection='3d')
ax.contourf(X,Y,Z, zdir='z', offset=0, cmap=cm.hot)
#ax.contour(X,Y,Z, zdir='z', offset=0, cmap=cm.hot,linewidths=1)
ax.set_xlabel('X', fontsize = 8)
ax.set_ylabel('Y', fontsize = 8)
ax.set_zlabel('Z', fontsize = 8)
m = cm.ScalarMappable(cmap=cm.hot)
    ax.set_title('Wave function $|\psi|^{2}$ in xy')
#===============
    # THIRD subplot
#===============
ax = fig.add_subplot(spec2[1, 0], projection='3d')
ax.contourf(X, Y, Z, zdir='y', offset=0, cmap=cm.hot)
ax.contourf(X, Y, -Z, zdir='y', offset=0, cmap=cm.hot)
#ax.contour(X, Y, Z, zdir='y', offset=0, cmap=cm.hot,linewidths=4)
ax.set_xlabel('X', fontsize = 8)
ax.set_ylabel('Y', fontsize = 8)
ax.set_zlabel('Z', fontsize = 8)
    ax.set_title('Wave function $|\psi|^{2}$ in zy')
#===============
    # FOURTH subplot
#===============
ax = fig.add_subplot(spec2[1, 1], projection='3d')
ax.contourf(X, Y, Z, zdir='x', offset=0, cmap=cm.hot)
ax.contourf(X, Y, -Z, zdir='x', offset=0, cmap=cm.hot)
#ax.contour(X, Y, Z, zdir='x', offset=0, cmap=cm.hot,linewidths=1)
ax.set_xlabel('X', fontsize = 8)
ax.set_ylabel('Y', fontsize = 8)
ax.set_zlabel('Z', fontsize = 8)
    ax.set_title('Wave function $|\psi|^{2}$ in zx')
fig.colorbar(m, shrink=0.8);
# -
FO_ATO_HIDROGENO(12,1,0,0)
# +
def PROB_FUN_OND_Ato_HID(a,n,l,m):
    # l: degree of the spherical harmonic
    # m: order of the harmonic
    # n: state
    #--------------------------------------- RADIAL FUNCTION OF THE HYDROGEN ATOM -----------------------------------------
    a0 = 1
    r = np.linspace(0,a,100)
    rho = (2 * r) / (n * a0)
    N = np.sqrt((np.math.factorial(n-l-1)/(2* n* np.math.factorial(n+l))) * (2/(n*a0)) ** 3)
    R = N * sp.assoc_laguerre(rho,n-l-1,2*l+1) * (rho ** l) * np.exp(- (rho / 2))
    # PROBABILITY DENSITY OF THE WAVE FUNCTION FOR THE HYDROGEN ATOM
    #--------------------------------------------------------------------------------------------------------------------------
fig = plt.figure(figsize=(5,4))
    PHI, THETA = np.mgrid[0:2*np.pi:100j, 0:np.pi:100j] # arrays of angular variables
    Y = np.abs(sp.sph_harm(m, l, PHI, THETA))**2 # absolute values of Y_ml
YX = Y * np.sin(THETA) * np.cos(PHI)
YY = Y * np.sin(THETA) * np.sin(PHI)
YZ = Y * np.cos(THETA)
for i in range(0,len(r)):
plt.plot(r[i] * YY[1], r[i] * YZ[1], 'k', color = 'r', alpha = (1/(max(R))) * abs(R[i]))
plt.plot(-r[i] * YY[1], r[i] * YZ[1], 'k',color = 'r', alpha =(1/(max(R))) * abs(R[i]))
# -
PROB_FUN_OND_Ato_HID(60,4,1,1)
|
Radial_Particula.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + tags=["context"] deletable=false dc={"key": "4"} run_control={"frozen": true} editable=false
# ## 1. Tweet classification: Trump vs. Trudeau
# <p>So you think you can classify text? How about tweets? In this notebook, we'll take a dive into the world of social media text classification by investigating how to properly classify tweets from two prominent North American politicians: <NAME> and <NAME>.</p>
# <p><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/4/47/President_Donald_Trump_and_Prime_Minister_Justin_Trudeau_Joint_Press_Conference%2C_February_13%2C_2017.jpg/800px-President_Donald_Trump_and_Prime_Minister_Justin_Trudeau_Joint_Press_Conference%2C_February_13%2C_2017.jpg" alt="<NAME> and <NAME> shaking hands." height="50%" width="50%"></p>
# <p><a href="https://commons.wikimedia.org/wiki/File:President_Donald_Trump_and_Prime_Minister_Justin_Trudeau_Joint_Press_Conference,_February_13,_2017.jpg">Photo Credit: Executive Office of the President of the United States</a></p>
# <p>Tweets pose specific problems to NLP, including the fact they are shorter texts. There are also plenty of platform-specific conventions to give you hassles: mentions, #hashtags, emoji, links and short-hand phrases (ikr?). Can we overcome those challenges and build a useful classifier for these two tweeters? Yes! Let's get started.</p>
# <p>To begin, we will import all the tools we need from scikit-learn. We will need to properly vectorize our data (<code>CountVectorizer</code> and <code>TfidfVectorizer</code>). And we will also want to import some models, including <code>MultinomialNB</code> from the <code>naive_bayes</code> module, <code>LinearSVC</code> from the <code>svm</code> module and <code>PassiveAggressiveClassifier</code> from the <code>linear_model</code> module. Finally, we'll need <code>sklearn.metrics</code> and <code>train_test_split</code> and <code>GridSearchCV</code> from the <code>model_selection</code> module to evaluate and optimize our model.</p>
# + tags=["sample_code"] dc={"key": "4"}
# Set seed for reproducibility
import random; random.seed(53)
# Import all we need from sklearn
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn import metrics
# + tags=["context"] deletable=false dc={"key": "11"} run_control={"frozen": true} editable=false
# ## 2. Transforming our collected data
# <p>To begin, let's start with a corpus of tweets which were collected in November 2017. They are available in CSV format. We'll use a Pandas DataFrame to help import the data and pass it to scikit-learn for further processing.</p>
# <p>Since the data has been collected via the Twitter API and not split into test and training sets, we'll need to do this. Let's use <code>train_test_split()</code> with <code>random_state=53</code> and a test size of 0.33, just as we did in the DataCamp course. This will ensure we have enough test data and we'll get the same results no matter where or when we run this code.</p>
# + tags=["sample_code"] dc={"key": "11"}
import pandas as pd
# Load data
tweet_df = pd.read_csv('datasets/tweets.csv')
# Create target
y = tweet_df['author']
# Split training and testing data
X_train, X_test, y_train, y_test = train_test_split(tweet_df['status'], y, random_state=53, test_size=.33)
# + tags=["context"] deletable=false dc={"key": "18"} run_control={"frozen": true} editable=false
# ## 3. Vectorize the tweets
# <p>We have the training and testing data all set up, but we need to create vectorized representations of the tweets in order to apply machine learning.</p>
# <p>To do so, we will utilize the <code>CountVectorizer</code> and <code>TfidfVectorizer</code> classes which we will first need to fit to the data.</p>
# <p>Once this is complete, we can start modeling with the new vectorized tweets!</p>
# + tags=["sample_code"] dc={"key": "18"}
# Initialize count vectorizer
count_vectorizer = CountVectorizer(stop_words='english', max_df=0.9, min_df=0.05)
# Create count train and test variables
count_train = count_vectorizer.fit_transform(X_train)
count_test = count_vectorizer.transform(X_test)
# Initialize tfidf vectorizer
tfidf_vectorizer = TfidfVectorizer(stop_words='english', max_df=0.9, min_df=0.05)
# Create tfidf train and test variables
tfidf_train = tfidf_vectorizer.fit_transform(X_train)
tfidf_test = tfidf_vectorizer.transform(X_test)
# + tags=["context"] deletable=false dc={"key": "25"} run_control={"frozen": true} editable=false
# ## 4. Training a multinomial naive Bayes model
# <p>Now that we have the data in vectorized form, we can train the first model. Investigate using the Multinomial Naive Bayes model with both the <code>CountVectorizer</code> and <code>TfidfVectorizer</code> data. Which will perform better? Why might that be?</p>
# <p>To assess the accuracies, we will print the test sets accuracy scores for both models.</p>
# + tags=["sample_code"] dc={"key": "25"}
# Create a MultinomialNB model
tfidf_nb = MultinomialNB()
tfidf_nb.fit(tfidf_train, y_train)
# Run predict on your TF-IDF test data to get your predictions
tfidf_nb_pred = tfidf_nb.predict(tfidf_test)
# Calculate the accuracy of your predictions
tfidf_nb_score = metrics.accuracy_score(tfidf_nb_pred, y_test)
# Create a MultinomialNB model
count_nb = MultinomialNB()
count_nb.fit(count_train, y_train)
# Run predict on your count test data to get your predictions
count_nb_pred = count_nb.predict(count_test)
# Calculate the accuracy of your predictions
count_nb_score = metrics.accuracy_score(count_nb_pred, y_test)
print('NaiveBayes Tfidf Score: ', tfidf_nb_score)
print('NaiveBayes Count Score: ', count_nb_score)
# + tags=["context"] deletable=false dc={"key": "32"} run_control={"frozen": true} editable=false
# ## 5. Evaluating our model using a confusion matrix
# <p>We see that the TF-IDF model performs better than the count-based approach. Based on what we know from the NLP fundamentals course, why might that be? We know that TF-IDF allows unique tokens to have a greater weight - perhaps tweeters are using specific important words that identify them! Let's continue the investigation.</p>
# <p>For classification tasks, an accuracy score doesn't tell the whole picture. A better evaluation can be made if we look at the confusion matrix, which shows the number of correct and incorrect classifications for each class. We can use the counts of True Positives, False Positives, False Negatives, and True Negatives to determine how well the model performed on a given class. How many times was Trump misclassified as Trudeau?</p>
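# Before plotting, it helps to check how a 2x2 confusion matrix is read, using toy labels (illustrative, separate from the tweet data):

```python
from sklearn.metrics import confusion_matrix

y_true = ['Trump', 'Trump', 'Trudeau', 'Trudeau', 'Trudeau']
y_pred = ['Trump', 'Trudeau', 'Trudeau', 'Trudeau', 'Trump']
cm = confusion_matrix(y_true, y_pred, labels=['Trump', 'Trudeau'])
# rows are actual labels, columns are predicted labels, in the order given by
# `labels`; cm[0, 1] counts Trump tweets misclassified as Trudeau
print(cm)
```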
# + tags=["sample_code"] dc={"key": "32"}
# %matplotlib inline
from datasets.helper_functions import plot_confusion_matrix
# Calculate the confusion matrices for the tfidf_nb model and count_nb models
tfidf_nb_cm = metrics.confusion_matrix(y_test, tfidf_nb_pred, labels=['<NAME>', '<NAME>'])
count_nb_cm = metrics.confusion_matrix(y_test, count_nb_pred, labels=['<NAME>', '<NAME>'])
# Plot the tfidf_nb_cm confusion matrix
plot_confusion_matrix(tfidf_nb_cm, classes=['<NAME>', '<NAME>'], title="TF-IDF NB Confusion Matrix")
# Plot the count_nb_cm confusion matrix without overwriting the first plot
plot_confusion_matrix(count_nb_cm, classes=['<NAME>', '<NAME>'], title="Count NB Confusion Matrix", figure=1)
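Before reading the plots, the four counts can be illustrated by hand on toy labels; a pure-Python sketch treating "Trump" as the positive class (scikit-learn's `metrics.confusion_matrix` produces the same counts):

```python
# Toy true labels and predictions
y_true = ["Trump", "Trump", "Trudeau", "Trudeau", "Trump"]
y_pred = ["Trump", "Trudeau", "Trudeau", "Trump", "Trump"]

pairs = list(zip(y_true, y_pred))
tp = sum(t == "Trump" and p == "Trump" for t, p in pairs)      # true positives
fn = sum(t == "Trump" and p == "Trudeau" for t, p in pairs)    # false negatives
fp = sum(t == "Trudeau" and p == "Trump" for t, p in pairs)    # false positives
tn = sum(t == "Trudeau" and p == "Trudeau" for t, p in pairs)  # true negatives

print([[tp, fn], [fp, tn]])  # [[2, 1], [1, 1]]
```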
# + tags=["context"] deletable=false dc={"key": "39"} run_control={"frozen": true} editable=false
# ## 6. Trying out another classifier: Linear SVC
# <p>So the Bayesian model only has one prediction difference between the TF-IDF and count vectorizers -- fairly impressive! Interestingly, there is some confusion when the predicted label is Trump but the actual tweeter is Trudeau. If we were going to use this model, we would want to investigate what tokens are causing the confusion in order to improve the model. </p>
# <p>Now that we've seen what the Bayesian model can do, how about trying a different approach? <a href="https://scikit-learn.org/stable/modules/generated/sklearn.svm.LinearSVC.html">LinearSVC</a> is another popular choice for text classification. Let's see if using it with the TF-IDF vectors improves the accuracy of the classifier!</p>
# + tags=["sample_code"] dc={"key": "39"}
# Create a LinearSVM model
tfidf_svc = LinearSVC()
tfidf_svc.fit(tfidf_train, y_train)
# Run predict on your tfidf test data to get your predictions
tfidf_svc_pred = tfidf_svc.predict(tfidf_test)
# Calculate your accuracy using the metrics module
tfidf_svc_score = metrics.accuracy_score(tfidf_svc_pred, y_test)
print("LinearSVC Score: %0.3f" % tfidf_svc_score)
# Calculate the confusion matrices for the tfidf_svc model
svc_cm = metrics.confusion_matrix(y_test, tfidf_svc_pred, labels=['<NAME>', '<NAME>'])
# Plot the confusion matrix using the plot_confusion_matrix function
plot_confusion_matrix(svc_cm, classes=['<NAME>', '<NAME>'], title="TF-IDF LinearSVC Confusion Matrix")
# + tags=["context"] deletable=false dc={"key": "46"} run_control={"frozen": true} editable=false
# ## 7. Introspecting our top model
# <p>Wow, the LinearSVC model is even better than the Multinomial Bayesian one. Nice work! Via the confusion matrix we can see that, although there is still some confusion where Trudeau's tweets are classified as Trump's, the False Positive rate is better than the previous model. So, we have a performant model, right? </p>
# <p>We might be able to continue tweaking and improving all of the previous models by learning more about parameter optimization or applying some better preprocessing of the tweets. </p>
# <p>Now let's see what the model has learned. Using the LinearSVC Classifier with two classes (Trump and Trudeau) we can sort the features (tokens), by their weight and see the most important tokens for both Trump and Trudeau. What are the most Trump-like or Trudeau-like words? Did the model learn something useful to distinguish between these two men? </p>
# + tags=["sample_code"] dc={"key": "46"}
from datasets.helper_functions import plot_and_return_top_features
# Import pprint from pprint
from pprint import pprint
# Get the top features using the plot_and_return_top_features function and your top model and tfidf vectorizer
top_features = plot_and_return_top_features(tfidf_svc, tfidf_vectorizer)
# pprint the top features
pprint(top_features)
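`plot_and_return_top_features` is a course helper, but the idea behind it — pair each token with its LinearSVC coefficient and sort, since the sign tells you which class the token pushes toward — can be sketched with made-up tokens and weights (not the real model's values):

```python
# Hypothetical tokens and coefficients: in a two-class LinearSVC the sign of
# each coefficient indicates which class a token pushes the decision toward.
feature_names = ["fake", "canada", "great", "diversité"]
coefficients = [-1.7, 1.2, -0.9, 1.5]

ranked = sorted(zip(coefficients, feature_names))
most_class_a = ranked[:2]   # most negative -> most indicative of one class
most_class_b = ranked[-2:]  # most positive -> most indicative of the other

print(most_class_a)  # [(-1.7, 'fake'), (-0.9, 'great')]
print(most_class_b)  # [(1.2, 'canada'), (1.5, 'diversité')]
```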
# + tags=["context"] deletable=false dc={"key": "53"} run_control={"frozen": true} editable=false
# ## 8. Bonus: can you write a Trump or Trudeau tweet?
# <p>So, what did our model learn? It seems like it learned that Trudeau tweets in French!</p>
# <p>I challenge you to write your own tweet using the knowledge gained to trick the model! Use the printed list or plot above to make some inferences about what words will classify your text as Trump or Trudeau. Can you fool the model into thinking you are Trump or Trudeau?</p>
# <p>If you can write French, feel free to make your Trudeau-impersonation tweet in French! As you may have noticed, these French words are common words, or "stop words". You could remove both English and French stop words from the tweets as a preprocessing step, but that might decrease the accuracy of the model because Trudeau is the only French speaker in the group. If you had a dataset with more than one French speaker, this would be a useful preprocessing step.</p>
# <p>Future work on this dataset could involve:</p>
# <ul>
# <li>Add extra preprocessing (such as removing URLs or French stop words) and see the effects</li>
# <li>Use GridSearchCV to improve both your Bayesian and LinearSVC models by finding the optimal parameters</li>
# <li>Introspect your Bayesian model to determine what words are more Trump- or Trudeau- like</li>
# <li>Add more recent tweets to your dataset using tweepy and retrain</li>
# </ul>
# <p>Good luck writing your impersonation tweets -- feel free to share them on Twitter!</p>
# + tags=["sample_code"] dc={"key": "53"}
# Write two tweets as strings, one which you want to classify as Trump and one as Trudeau
trump_tweet = 'fake news'
trudeau_tweet = 'canada'
# Vectorize each tweet using the TF-IDF vectorizer's transform method
trump_tweet_vectorized = tfidf_vectorizer.transform([trump_tweet])
trudeau_tweet_vectorized = tfidf_vectorizer.transform([trudeau_tweet])
# Call the predict method on your vectorized tweets
trump_tweet_pred = tfidf_svc.predict(trump_tweet_vectorized)
trudeau_tweet_pred = tfidf_svc.predict(trudeau_tweet_vectorized)
print("Predicted Trump tweet", trump_tweet_pred)
print("Predicted Trudeau tweet", trudeau_tweet_pred)
|
Who's Tweeting_ Trump or Trudeau_/notebook.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + tags=[]
import sys
sys.path.append('C:/maldb/python/mlp/')
import tensorflow as tf
from tensorflow import keras
import numpy as np
np.set_printoptions(threshold=np.inf)
import matplotlib.pyplot as plt
from ionizer import *
# + tags=[]
#When we shuffle our training and test arrays, we need to shuffle them in unison so that the indices in the labels array line up with their corresponding peptide in the training array.
def unison_shuffled_copies(a, b):
assert len(a) == len(b)
p = np.random.permutation(len(a))
return a[p], b[p]
#To avoid feeding wildly varying numbers into the NN, we normalize things by subtracting the mean from each feature value and dividing by the standard deviation
def normalize_data(data):
mean = data.mean(axis=0)
data -= mean
std = data.std(axis=0)
data /= std
return data
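The normalization above can be checked in isolation; a pure-Python sketch of the same per-feature logic on one toy column:

```python
from statistics import mean, pstdev

# One toy feature column: after subtracting the mean and dividing by the
# standard deviation it should have mean 0 and standard deviation 1.
column = [2.0, 4.0, 6.0]
m, s = mean(column), pstdev(column)
normalized = [(x - m) / s for x in column]

print(normalized)  # [-1.224..., 0.0, 1.224...]
```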
# + tags=[]
from keras import models
from keras import layers
from keras import regularizers
#Lets make our dense model
def build_model():
model = models.Sequential()
model.add(layers.Dense(128, activation='relu', kernel_regularizer=regularizers.l2(0.001), input_shape=(feature_vector_length,)))
model.add(layers.Dropout(0.2))
model.add(layers.Dense(128, activation='relu', kernel_regularizer=regularizers.l2(0.001)))
model.add(layers.Dropout(0.2))
model.add(layers.Dense(128, activation='relu', kernel_regularizer=regularizers.l2(0.001)))
model.add(layers.Dropout(0.2))
model.add(layers.Dense(128, activation='relu', kernel_regularizer=regularizers.l2(0.001)))
model.add(layers.Dropout(0.2))
model.add(layers.Dense(128, activation='relu', kernel_regularizer=regularizers.l2(0.001)))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(
optimizer='rmsprop',
loss='binary_crossentropy',
metrics=[
keras.metrics.BinaryAccuracy(name="binary_accuracy", dtype=None, threshold=0.5),
tf.keras.metrics.Precision(),
tf.keras.metrics.Recall()
]
)
return model
# + tags=[]
#Get the data from the excel sheet, calculate feature vectors for each entry (row) using ionizers.py
train_data, train_seqs, train_labels = get_ionizer_training_data('C:/maldb/python/mlp/data/ionizers.csv')
#Let's vectorize our sequences
x_train = normalize_data(train_data).astype('float32')
y_train = train_labels.astype('float32')
x_train, y_train = unison_shuffled_copies(x_train, y_train)
#Cut features
x_train = x_train[:, :50]
#Make a test set
test_set_size = 50
x_test = x_train[:test_set_size]
y_test = y_train[:test_set_size]
x_train = x_train[test_set_size:]
y_train = y_train[test_set_size:]
#For the input layer
feature_vector_length = x_train.shape[1]
# + tags=[]
#Define the number of folds... this will give us an 80/20 split
k = 5
epochs = 120
num_val_samples = len(x_train) // k
scores_binacc = []
scores_precision = []
scores_recall = []
#Train the dense model in k iterations
for i in range(k):
print('Processing fold #', i)
val_data = x_train[i * num_val_samples : (i + 1) * num_val_samples]
val_targets = y_train[i * num_val_samples : (i + 1) * num_val_samples]
print('Validation partition = ', i * num_val_samples, (i + 1) * num_val_samples)
print('Training partition 1 = ', 0, i * num_val_samples)
print('Training partition 2 = ', (i+1) * num_val_samples, len(x_train))
partial_train_data = np.concatenate(
[
x_train[:i * num_val_samples],
x_train[(i+1) * num_val_samples:]
],
axis=0
)
partial_train_targets = np.concatenate(
[
y_train[:i * num_val_samples],
y_train[(i+1) * num_val_samples:]
],
axis=0
)
model = build_model()
history = model.fit(
partial_train_data,
partial_train_targets,
epochs=epochs,
validation_data=(val_data, val_targets), #capture validation metrics so the history curves below can be plotted
verbose=0
)
val_loss, val_binacc, val_precision, val_recall = model.evaluate(val_data, val_targets, verbose=0)
scores_binacc.append(val_binacc)
scores_precision.append(val_precision)
scores_recall.append(val_recall)
# -
print('Mean Validation Binary Accuracy: ', np.mean(scores_binacc))
print('Mean Validation Precision: ', np.mean(scores_precision))
print('Mean Validation Recall: ', np.mean(scores_recall))
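The fold slicing can be sanity-checked on its own: across the k iterations, the validation windows should be disjoint and cover the data. A standalone sketch with toy sizes (independent of the real `x_train`):

```python
k_folds = 5
n_samples = 100  # toy sample count
fold_size = n_samples // k_folds

# The i-th validation window, exactly as sliced in the loop above
folds = [list(range(i * fold_size, (i + 1) * fold_size)) for i in range(k_folds)]

# Every sample lands in exactly one validation fold
all_val = sorted(i for fold in folds for i in fold)
print(all_val == list(range(n_samples)))  # True
```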
# +
x_test_predictions = model.predict(np.array(x_test))
b = keras.metrics.BinaryAccuracy()
p = keras.metrics.Precision()
r = keras.metrics.Recall()
print(y_test)
b.update_state(
[y_test],
[x_test_predictions]
)
p.update_state(
[y_test],
[x_test_predictions]
)
r.update_state(
[y_test],
[x_test_predictions]
)
print('Test Binary Accuracy: ', b.result().numpy())
print('Test Precision: ', p.result().numpy())
print('Test Recall: ', r.result().numpy())
# + jupyter={"outputs_hidden": true}
history_dict = history.history
print(history_dict.keys())
e = range(1, epochs + 1)
loss = history_dict['loss']
val_loss = history_dict['val_loss']
plt.plot(e, loss, 'r-', label='Training loss')
plt.plot(e, val_loss, 'g-', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
# +
#We should randomly shuffle our data before splitting it into training and test sets
#The peptides were deposited into the array on a per-protein basis, so each protein may have a specific AA composition, properties etc.
#Thus, if the first 85% of our data were soluble proteins, and the last 15% were membrane proteins, we are likely to have poor prediction since our model never saw/had a chance to learn on membrane peptides!
#Shuffle!
x_train, y_train = unison_shuffled_copies(x_train, y_train)
# +
#Slice the training and validation sets
split = int(0.8 * len(x_train)) #`split` was never defined above; use an 80/20 train/validation cut
partial_x_train = x_train[:split]
x_val = x_train[split:]
partial_y_train = y_train[:split]
y_val = y_train[split:]
# + jupyter={"outputs_hidden": true}
loss_values = history_dict['loss']
val_loss_values = history_dict['val_loss']
plt.plot(e, loss_values, 'r-', label='Training Loss')
plt.plot(e, val_loss_values, 'g-', label='Validation Loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
# + jupyter={"outputs_hidden": true}
acc_values = history_dict['binary_accuracy']
val_acc_values = history_dict['val_binary_accuracy']
plt.plot(e, acc_values, 'r-', label='Training binary accuracy')
plt.plot(e, val_acc_values, 'g-', label='Validation binary accuracy')
plt.title('Training and validation binary accuracy')
plt.xlabel('Epochs')
plt.ylabel('Binary Accuracy')
plt.legend()
plt.show()
# + jupyter={"outputs_hidden": true}
acc_values = history_dict['precision_10']
val_acc_values = history_dict['val_precision_10']
plt.plot(e, acc_values, 'r-', label='Training precision')
plt.plot(e, val_acc_values, 'g-', label='Validation precision')
plt.title('Training and validation binary precision')
plt.xlabel('Epochs')
plt.ylabel('precision')
plt.legend()
plt.show()
# -
plt.hist(model.predict(x_train), bins=50)
|
python/mlp/maldb_nn_convolutional.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.6 64-bit (''base'': conda)'
# name: python376jvsc74a57bd0ec4f1c395c9060a2f4933d56a58278157b2fbfb1d142affcb202d55b2721a26e
# ---
# +
def gen_basico():
yield "uno"
yield "dos"
yield "tres"
for valor in gen_basico():
print(valor) # uno, dos, tres
# +
def gen_diez_numeros(inicio):
fin = inicio + 10
while inicio < fin:
inicio+=1
yield inicio, fin
for inicio, fin in gen_diez_numeros(23):
print(inicio, fin, end=" - ")
# +
# not actually a generator: builds and returns the whole list at once
def create_cubes(n):
return [num**3 for num in range(0,n)]
# +
create_cubes(10)
# -
# using a generator instead: values are produced lazily with yield
def create_cubes_2(n):
for num in range(0,n):
yield num**3
# +
# a generator is a function that returns an iterator we can loop over, handing back one value at a time
x = create_cubes_2(10)
print(x.__next__())
print(x.__next__())
# +
# fibonacci
def gen_fib(n):
a= 1
b= 1
for i in range(n):
yield a
a,b = b,b+a
# -
for num in gen_fib(10):
print(num)
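Materializing the generator with `list()` is an easy way to check it end to end (redefined here so the snippet is self-contained):

```python
def gen_fib(n):
    # same generator as above: yields the first n Fibonacci numbers
    a = 1
    b = 1
    for i in range(n):
        yield a
        a, b = b, b + a

print(list(gen_fib(10)))  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```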
def simple_gen():
for i in range(3):
yield i
g = simple_gen()
print(next(g))
print(next(g))
# +
mytuple = ("apple", "banana", "cherry")
next(mytuple) # raises TypeError: a tuple is an iterable, not an iterator -- call iter(mytuple) first
# +
# ITERATORS
lista = [1,2,3,4,5,6]
iterador_de_la_lista = iter(lista)
try:
while True:
print(iterador_de_la_lista.__next__())
except StopIteration:
print('end of iteration')
# +
# Implement the __iter__() / __next__() methods in our own class:
class PrintNumber:
def __init__(self, max):
self.max = max
def __iter__(self):
self.num = 0
return self
def __next__(self):
if(self.num>=self.max):
raise StopIteration
self.num +=1
return self.num
imprimirNum = PrintNumber(6)
for num in imprimirNum:
print(num, end=' ')
imprimirNum_to_iterador = iter(imprimirNum)
print('\n')
print(imprimirNum_to_iterador.__next__())
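A `for` loop is syntactic sugar for exactly this protocol: call `iter()` once, then `next()` until `StopIteration` is raised. A minimal demonstration:

```python
values = [10, 20, 30]

# What `for v in values` does under the hood
it = iter(values)
collected = []
while True:
    try:
        collected.append(next(it))
    except StopIteration:
        break

print(collected == values)  # True
```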
|
Code/9.generadores/basic.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# **This notebook is an exercise in the [Introduction to Machine Learning](https://www.kaggle.com/learn/intro-to-machine-learning) course. You can reference the tutorial at [this link](https://www.kaggle.com/dansbecker/random-forests).**
#
# ---
#
# ## Recap
# Here's the code you've written so far.
# +
# Code you have previously used to load data
import pandas as pd
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
# Path of the file to read
iowa_file_path = '../input/home-data-for-ml-course/train.csv'
home_data = pd.read_csv(iowa_file_path)
# Create target object and call it y
y = home_data.SalePrice
# Create X
features = ['LotArea', 'YearBuilt', '1stFlrSF', '2ndFlrSF', 'FullBath', 'BedroomAbvGr', 'TotRmsAbvGrd']
X = home_data[features]
# Split into validation and training data
train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1)
# Specify Model
iowa_model = DecisionTreeRegressor(random_state=1)
# Fit Model
iowa_model.fit(train_X, train_y)
# Make validation predictions and calculate mean absolute error
val_predictions = iowa_model.predict(val_X)
val_mae = mean_absolute_error(val_predictions, val_y)
print("Validation MAE when not specifying max_leaf_nodes: {:,.0f}".format(val_mae))
# Using best value for max_leaf_nodes
iowa_model = DecisionTreeRegressor(max_leaf_nodes=100, random_state=1)
iowa_model.fit(train_X, train_y)
val_predictions = iowa_model.predict(val_X)
val_mae = mean_absolute_error(val_predictions, val_y)
print("Validation MAE for best value of max_leaf_nodes: {:,.0f}".format(val_mae))
# Set up code checking
from learntools.core import binder
binder.bind(globals())
from learntools.machine_learning.ex6 import *
print("\nSetup complete")
# -
# # Exercises
# Data science isn't always this easy, but replacing the decision tree with a Random Forest is going to be an easy win.
# ## Step 1: Use a Random Forest
# +
from sklearn.ensemble import RandomForestRegressor
# Define the model. Set random_state to 1
rf_model = RandomForestRegressor(random_state=1)
# fit your model
rf_model.fit(train_X, train_y)
# Calculate the mean absolute error of your Random Forest model on the validation data
rf_val_mae = mean_absolute_error(rf_model.predict(val_X),val_y)
print("Validation MAE for Random Forest Model: {}".format(rf_val_mae))
# Check your answer
step_1.check()
# -
# The lines below will show you a hint or the solution.
# step_1.hint()
# step_1.solution()
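Why is the Random Forest an easy win over a single tree? One intuition is variance reduction: averaging many noisy estimators cancels much of their independent error. A toy sketch with simulated predictors (standing in for trees — not real scikit-learn models):

```python
import random

random.seed(0)
true_value = 200000.0  # hypothetical "true" house price

# 100 unbiased but noisy estimators standing in for individual trees
predictions = [true_value + random.gauss(0, 20000) for _ in range(100)]

mean_individual_error = sum(abs(p - true_value) for p in predictions) / len(predictions)
ensemble_error = abs(sum(predictions) / len(predictions) - true_value)

# Averaging cancels much of the independent noise
print(ensemble_error < mean_individual_error)  # True
```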
# So far, you have followed specific instructions at each step of your project. This helped you learn key ideas and build your first model, but now you know enough to try things on your own.
#
# Machine Learning competitions are a great way to try your own ideas and learn more as you independently navigate a machine learning project.
#
# # Keep Going
#
# You are ready for **[Machine Learning Competitions](https://www.kaggle.com/dansbecker/machine-learning-competitions).**
#
# ---
#
#
#
#
# *Have questions or comments? Visit the [Learn Discussion forum](https://www.kaggle.com/learn-forum/161285) to chat with other Learners.*
|
Intro to Machine Learning/6 Random Forests/exercise-random-forests.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (mort)
# language: python
# name: mort
# ---
# +
import numpy as np
import nibabel as nib
import h5py
import pandas as pd
from scipy.ndimage.interpolation import zoom
import sys
sys.path.append("../")
from config import doc_dir, indices_holdout
# -
doc_dir = "/analysis/fabiane/data/MS/explMS/file_list_HC_MS_BET_FLAIR.csv" #override the doc_dir imported from config for this run
output_shape = (96, 114, 96)
z_factor=0.525
df = pd.read_csv(doc_dir)
df.head()
# +
# split datasets
holdout_df = df.iloc[indices_holdout]
train_df = df.drop(indices_holdout)
holdout_df.reset_index(inplace=True)
holdout_df = holdout_df.drop("index", axis="columns")
train_df.reset_index(inplace=True)
train_df = train_df.drop("index", axis="columns")
# -
holdout_df.head()
train_df.head()
print(len(train_df))
print(len(holdout_df))
(len(train_df), ) + output_shape
output_shape = (182, 218, 182) #switch to full resolution; overrides the downsampled shape defined above
# load images in matrix
def create_dataset(dataset, z_factor, output_shape):
data_matrix = np.empty(shape=((len(dataset),) + output_shape))
labels = np.empty(shape=((len(dataset),)))
for idx, row in dataset.iterrows():
path = row["path"]
# switch to mprage
path = path.replace("FLAIR", "MPRAGE")
path = path.replace("/Ritter/MS", "/Ritter/Dataset/MS")
path = path.replace(".nii.gz", ".nii")
GM_path = path.replace("BET_", "c1")
WM_path = path.replace("BET_", "c2")
CSF_path = path.replace("BET_", "c3")
# in case of no lesion mask fill with random values
try:
GM = nib.load(GM_path).get_data().astype(np.float32)
WM = nib.load(WM_path).get_data().astype(np.float32)
CSF = nib.load(CSF_path).get_data().astype(np.float32)
mask = np.zeros_like(GM)
mask[np.where(np.logical_or(np.greater(WM, CSF), np.greater(GM, CSF)))] = 1
#struct_arr = zoom(mask, z_factor, order=0) #order 0 = NN interpolation
struct_arr = mask
except(FileNotFoundError):
print("File not Found: {}".format(path))
struct_arr = np.random.rand(output_shape[0], output_shape[1], output_shape[2])
data_matrix[idx] = struct_arr
labels[idx] = (row["label"] == "MS") *1
return data_matrix, labels
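The voxel rule inside `create_dataset` — mark a voxel as brain tissue whenever white or grey matter probability beats CSF — can be illustrated without numpy on a few toy voxels:

```python
# Toy per-voxel tissue probabilities: (GM, WM, CSF)
voxels = [
    (0.7, 0.2, 0.1),  # mostly grey matter  -> brain
    (0.1, 0.8, 0.1),  # mostly white matter -> brain
    (0.1, 0.2, 0.7),  # mostly CSF          -> background
]

# Same condition as np.logical_or(np.greater(WM, CSF), np.greater(GM, CSF))
mask = [1 if (wm > csf or gm > csf) else 0 for gm, wm, csf in voxels]
print(mask)  # [1, 1, 0]
```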
train_dataset, train_labels = create_dataset(train_df, z_factor=z_factor, output_shape=output_shape)
holdout_dataset, holdout_labels = create_dataset(holdout_df, z_factor=z_factor, output_shape=output_shape)
print(train_dataset.shape)
print(holdout_dataset.shape)
import matplotlib.pyplot as plt
train_df.iloc[-1]
train_dataset[-1].max()
plt.imshow(train_dataset[-2][:,:,48], cmap='gray')
plt.show()
plt.imshow(holdout_dataset[-1][:,:,48], cmap='gray')
plt.show()
h5 = h5py.File('/data/Ritter/MS/CIS/train_dataset_brain_masks.h5', 'w')
h5.create_dataset('masks', data=train_dataset)
h5.close()
h5 = h5py.File('/data/Ritter/MS/CIS/holdout_dataset_brain_masks.h5', 'w')
h5.create_dataset('masks', data=holdout_dataset)
h5.close()
|
create_hdf5_files/Make MS HDF5 set-brain masks.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Visualizing networks via igraph
#
# You've created a few networks, detected communities, and done some other basic analysis -- time to visualize the network and check if there are some discernable patterns!
#
# There are 2 ways that *networks/graphs* can be visualized:
# * use your python library of choice to export a **gml** file (or other formats), which you can open using a visualizing app of your choice ([**gephi**](https://gephi.org/) is one of the most popular among researchers); or
# * continue using python to create visuals using code!
#
# What's the difference? Well for one, the size of your network. If you are talking about a few thousand (or a few dozen thousand) nodes, and a few hundred thousand edges, apps work great! If you are talking about millions of data points though, these apps start to struggle. **Gephi** for instance recommends that you use their app for networks of [up to 100,000 nodes and 1M edges](https://gephi.org/features/) (although I've used it with graphs several times those recommended sizes, and the result was okay; it just took a long time)!
#
# In this tutorial, I will focus on how to use the python **igraph** library to create visualizations via code!
#
# **NOTE:** for this tutorial and for things to work, you need to have [**cairo**](https://www.cairographics.org/) installed along with **igraph**.
# ## 1. Example with random graph
# Before using real networks assembled from Twitter data, let's auto-generate a random graph to determine how visualizations work in theory:
#
# The **igraph** intro gives us one way to do this -- using random geometric graphs:
# > [**Graph.GRG()**](http://igraph.org/python/doc/tutorial/tutorial.html#generating-graphs) generates a geometric random graph: *n* points are chosen randomly and uniformly inside the unit square and pairs of points closer to each other than a predefined distance *d* are connected by an edge.
#
# As random geometric graphs *usually* exhibit behaviour typical of real-life social networks, unlike Erdos-Renyi graphs, they are quite appropriate to experiment with.
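The definition of a geometric random graph can be made concrete in a few lines of pure Python — a sketch of the idea, not igraph's implementation:

```python
import math
import random

random.seed(42)
n_points, radius = 30, 0.5

# n points chosen uniformly in the unit square
points = [(random.random(), random.random()) for _ in range(n_points)]

# connect every pair of points closer to each other than the cutoff distance
edges = [
    (i, j)
    for i in range(n_points)
    for j in range(i + 1, n_points)
    if math.dist(points[i], points[j]) < radius
]

print(len(points), len(edges))
```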
# +
from igraph import *
from datetime import datetime
g = Graph.GRG(30, 0.5)
g.summary()
# -
# ### Choosing the layout
# Now that we have a random graph, let's plot it! First we set up a [**layout**](http://igraph.org/python/doc/tutorial/tutorial.html#layout-algorithms) that we want to use, and then [**plot**](http://igraph.org/python/doc/tutorial/tutorial.html#drawing-a-graph-using-a-layout) the graph according to the layout we chose!
start = datetime.now()
layout = g.layout_fruchterman_reingold() #choose layout
plt = plot(g, layout=layout, bbox = (500, 300), margin = 20) #plot the graph with the layout
#bbox specifies size of the image (horizontal, vertical), and margin specifies space around the image
print("It took {} to generate the layout:".format(datetime.now() - start)) #we will time the computer to see how long it takes
plt
# There are a few more [layouts to choose from](http://igraph.org/python/doc/tutorial/tutorial.html#layout-algorithms), for example a simple circle
l = g.layout_circle()
p = plot(g, layout=l, bbox = (300, 300), margin=20)
p
# You will notice that if you try any of the 3D layouts with the above code, python will err out. Unfortunately 3D visuals of networks require another renderer; **igraph** will not render them itself. I will cover it separately in another tutorial (WIP).
#
# So let's get to the next step: how to modify this rendering to add colors, spacing, node sizes, and all that other fun stuff
#
# -------------
#
# Firstly, we need to add a few features to our graph so that we can work with it:
# * `'community'` attribute - create a random assignment of 0 to 2
# * `'size'` attribute - calculate the indegree of the node and assign that as a property
# +
import random #use python's random number generator
#loop through all vertices
for vertex in g.vs:
vertex['community'] = random.randint(0,2) #assign random community from 0 to 2
vertex['size'] = vertex.indegree()
#lets see what node 0 is like as an example:
g.vs[0].attributes()
# -
# Great! Now we have 2 new attributes to work with.
#
# ### Adding color to nodes
#
# Let's add colors to the 3 communities we just created:
#first create a dictionary that we will use to assign colors by community
color_dict = {0: "#0ec4ff", 1: "pink", 2: "yellow"}
g.vs["color"] = [color_dict[community] for community in g.vs["community"]]
layout = g.layout_fruchterman_reingold()
plt = plot(g, layout=layout, bbox = (500, 300), margin = 20)
plt
# That's getting better! You will notice that **plot** detected the attributes `'size'` and `'color'` automatically, so all we have to do is assign the required properties to these attributes. Let's change up the edge colors a bit too in order to add to the visual representation of *communities*
#
# We do this by checking the nodes connected by the edge -- if both are in the same community, then the edge takes that community's color. If they belong to different communities, then the edge color is dictated by the node with the higher indegree. Hence:
# * use the `g.es[].tuple` property that gives the source and target node id as a tuple
# * assign color via a logical test: if `g.es[].tuple[0] == g.es[].tuple[1] then 'same color' else check_degrees(g.es[].tuple)`
# +
for edge in g.es:
tup = edge.tuple
edge['curved'] = 0.2 #lets also curve the links so that it looks better
if len(tup) == 2:
if tup[0] == tup[1]:
edge['color'] = color_dict[tup[0]]
else:
if g.vs[tup[0]].indegree() > g.vs[tup[1]].indegree():
edge['color'] = color_dict[g.vs[tup[0]]['community']]
elif g.vs[tup[0]].indegree() < g.vs[tup[1]].indegree():
edge['color'] = color_dict[g.vs[tup[1]]['community']]
else:
edge['color'] = 'black'
#great, lets try to see if that worked with one node:
g.es[0]
# -
# Okay, let's see what that looks like
start = datetime.now()
layout = g.layout_fruchterman_reingold()
plt = plot(g, layout=layout, bbox = (500, 300), margin = 20)
print("It took {} to generate the layout:".format(datetime.now() - start))
plt
# Looking better. Let's apply this to a real-life example.
# ## Example: Dec 25, 2016 crash of Tu-154 with Alexandrov choir on board
#
# Let's try to apply the above setup to create a visualization of an actual network. An example is the crash of the Russian Tu-154 aircraft on December 25, 2016. The aircraft, which was carrying the Alexandrov Ensemble choir of the Russian Armed Forces as well as the well-known humanitarian activist Elizaveta Glinka, [crashed into the Black Sea, killing all on board](https://en.wikipedia.org/wiki/2016_Russian_Defence_Ministry_Tupolev_Tu-154_crash). This sample was collected using [Twitter's REST (Search) API](https://developer.twitter.com/en/docs/tweets/search/api-reference/get-search-tweets.html), with each node representing a Russian-speaking account that discussed this crash -- in order to analyze the reaction in Russia to the event.
#use an actual specific graph -- small 4mb example, good size for experiments
path = r"...Tu154_crash_REST_graph.gml"
g = Graph.Read_GML(path)
g.summary()
# This example graph has 3,307 nodes and 83k edges, with a few attributes for each node -- such as `'im'` or the detected **infomap community**, as well as `'id'` or **twitter userid** of each user, and other attributes such as `'name'` and `'username'` (or **@handle**).
# Let's set up the colors as a separate function, as it will be used throughout the script, and let's turn on all the bells and whistles we've used so far (including a few new ones):
# +
def get_color(community):
"""
get_color() is a simple function to return the color for a node or edge
"""
color_dict = {
0: "#df0eff", #pro-gov
1: "#0ec4ff", #opposition
2: "#4c463e", #centrists
3: "#73c400" #youths
}
return color_dict[community] if community in color_dict else "#aaa194"
def prepare_graph(g):
"""
prepare_graph() changes node and edge sizes and assigns color as chosen in the get_color() color dictionary
"""
nodes_to_delete = []
for vertex in g.vs:
if vertex.indegree() == 0:
nodes_to_delete.append(vertex.index)
else:
vertex['size'] = vertex.indegree()/100000
vertex['color'] = get_color(vertex['im'])
g.delete_vertices(nodes_to_delete)
for edge in g.es:
tup = edge.tuple
edge['size'] = 0.1
edge['arrow_size'] = None
edge['curved'] = 0.2
if len(tup) == 2:
if tup[0] == tup[1]:
edge['color'] = get_color(tup[0])
else:
if g.vs[tup[0]].indegree() > g.vs[tup[1]].indegree():
edge['color'] = get_color(g.vs[tup[0]]['im'])
elif g.vs[tup[0]].indegree() < g.vs[tup[1]].indegree():
edge['color'] = get_color(g.vs[tup[1]]['im'])
else:
edge['color'] = 'black'
return g
g = prepare_graph(g)
# -
# Okay, now that the properties required are set up, let's display it!
start = datetime.now()
layout = g.layout_fruchterman_reingold()
plt = plot(g, "Tu-154_crash_igraph.pdf", layout=layout, bbox = (15000, 9000), margin = 20)
print("It took {} to generate the layout:".format(datetime.now() - start))
# Above we generated the plot, assigned the value to the `plt` variable, and exported it as a pdf. Let's now open the exported file to see what it looks like:
# 
# That doesn't look very nice... this does happen to be a small network. What happens if we visualize it with **gephi**?
# 
# So the very sad conclusion is that **gephi** works much better if you have a small network.
|
walkthroughs/Visualizing_netwroks_in_igraph.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# *Step 1: Import necessary packages*
# +
import os
from apiclient import discovery
from httplib2 import Http
import oauth2client
from oauth2client import file, client, tools
import pandas as pd
import nrrd
import nibabel as nib
import io
from googleapiclient.http import MediaIoBaseDownload
import cv2
import numpy as np
from skimage.io import imread
import matplotlib.pyplot as plt
#Importing files from my packages
import ifmodels.register as register
import ifmodels.gdaccess as gdaccess
import ifmodels.preprocess as preprocess
# -
# *Step 2: Importing the photo used for practice*
im = imread('/Volumes/imagereg/6-26-19-tiffexport/test3.tif')
# *Step 3: Getting the image and prepping it to be registered*
im_max = np.max(im, axis = 0)
blue = im_max[:,:,2]
blue.shape
# *Step 4: Getting the registration points for the moving image*
binary = register.mim_edge_detector(blue)
binary = register.image_cleaning(binary)
coor_df = register.find_points(binary)
# *Step 5: Visualizing the image registration points*
binaryx, binaryy = binary.shape
checkpoints = np.zeros([binaryx, binaryy, 3], dtype=np.uint8)
for row in coor_df.itertuples():
x = row.M_x
y = row.M_y
register.red_points(x,y,binary, checkpoints)
plt.imshow(binary, cmap='gray')
plt.imshow(checkpoints, alpha=0.6);
# *Step 6: Getting the appropriate atlas slice for the image*
df = pd.read_excel('distances.xlsx')
df.head()
# *Step 7: Getting the registration points for the atlas slice*
file = '/Users/HawleyHelm/Desktop/image-registration/Dawley-p14/NITRC-dti_rat_atlas-Downloads/atlas_segmentation.nrrd'
F_im_nii = register.nrrd_to_nii(file)
slice_number = df['Atlas Slice'][0]
sagittal, coronal, horizontal = register.atlas_slice(F_im_nii, slice_number)
resized = preprocess.resize(coronal, 20)
resized = register.mim_edge_detector(resized)
fim_coor_df = register.find_points(resized)
binaryx, binaryy = resized.shape
checkpoints = np.zeros([binaryx, binaryy, 3], dtype=np.uint8)
for row in fim_coor_df.itertuples():
x = row.M_x
y = row.M_y
register.red_points(x,y,resized, checkpoints)
plt.imshow(resized, cmap='gray')
plt.imshow(checkpoints, alpha=0.6);
# *Step 8: Registration*
#Change the fim_coor_df to have the proper labels
df = pd.concat([coor_df, fim_coor_df], axis = 1)
ainv = register.reg_coefficients(df)
registered_im = register.registration(blue, ainv)
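# The `register` helpers used above are project-specific, but estimating a 2-D affine transform from paired control points is commonly done with an ordinary least-squares fit. A minimal sketch under that assumption (the names here are illustrative, not the package's API):

```python
import numpy as np

def fit_affine(moving_pts, fixed_pts):
    """Least-squares 2-D affine transform mapping moving_pts onto fixed_pts.

    Both inputs are (N, 2) arrays of (x, y) coordinates, N >= 3.
    Returns a 2x3 matrix A such that fixed ~= A @ [x, y, 1].
    """
    moving_pts = np.asarray(moving_pts, dtype=float)
    fixed_pts = np.asarray(fixed_pts, dtype=float)
    ones = np.ones((len(moving_pts), 1))
    M = np.hstack([moving_pts, ones])      # (N, 3) design matrix
    # Solve M @ A.T ~= fixed in the least-squares sense
    A_T, *_ = np.linalg.lstsq(M, fixed_pts, rcond=None)
    return A_T.T                           # (2, 3)

# Points related by a pure translation of (+5, -2)
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1]])
dst = src + np.array([5, -2])
A = fit_affine(src, dst)
print(np.round(A, 3))
```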
# *Step 9: Saving registered image to a .nii file*
Registered_stack = np.zeros(resized.shape)
Registered_stack = np.vstack([Registered_stack[np.newaxis,:,:] , registered_im[np.newaxis,:,:]])
Registered_stack.shape
# *Step 10: Saving the file to my computer*
M_im_nii = nib.Nifti2Image(Registered_stack, affine=np.eye(4))
M_im_nii.get_data_dtype() == np.dtype(np.int16)
M_im_nii.header.get_xyzt_units()
nib.save(M_im_nii, os.path.join('/Users/HawleyHelm/Desktop/image-registration/Dawley-p14','fullbraintest.nii.gz'))
|
scripts/full-practice-without-googledrive.ipynb
|
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.1.0
# language: julia
# name: julia-1.1
# ---
# # Gauss Integration
#
# A fairly general approach to computing integrals is to adopt the following strategy:
#
# $$
# \int_a^b f(x)\:dx \approx \sum_{i=1}^n c_i f(x_i)
# $$
#
# where $x_i$ are nodes and $c_i$ the corresponding weights. The first question is how to compute these weights $c_i$.
#
# Using polynomial interpolation on the nodes $x_i$, a Lagrange interpolant can *always* be constructed; that is, for any polynomial $p_{n-1}(x)$ of degree n-1,
#
# $$
# p_{n-1}(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_{n-1} x^{n-1} = \sum_{i=1}^{n} p_{n-1}(x_i) h_i(x)
# $$
#
# In this equation, $h_i(x)$ is the i-th Lagrange interpolator. From this polynomial approximation, the integral can be computed:
#
# $$
# \int_a^b f(x)\:dx \approx \int_a^b p_{n-1}(x)\:dx = \int_a^b \left[\sum_{i=1}^{n} p_{n-1}(x_i) h_i(x)\right] \: dx = \sum_{i=1}^{n} \int_a^b p_{n-1}(x_i) h_i(x)\:dx
# $$
#
# If the polynomial $p_{n-1}(x)$ interpolates the function $f(x)$ at the n nodes $x_i$, then
#
# $$
# \sum_{i=1}^{n} \int_a^b p_{n-1}(x_i) h_i(x)\:dx = \sum_{i=1}^{n} \int_a^b f(x_i) h_i(x)\:dx = \sum_{i=1}^n c_i f(x_i)
# $$
#
# with
#
# $$
# c_i = \int_a^b h_i(x)\:dx \qquad 1 \le i \le n
# $$
#
#
# **The points $x_i$ are fixed by whoever wants to compute the integral.**
# <!-- TEASER_END -->
# What if we choose optimal integration points instead? Can we do better?
#
# To keep things simple, the integrals will always be computed over the domain $-1\le x \le 1$. For an arbitrary domain,
#
# $$
# \int_a^b f(x)\:dx = \int_{-1}^1 f\left[x(\xi)\right] J(\xi) \:d\xi
# $$
#
# where $J(\xi)$ is the Jacobian of the transformation; with a mapping of the form
#
# $$
# x(\xi) = \frac{1-\xi}{2} a + \frac{1+\xi}{2} b
# $$
#
# it follows that
#
# $$
# J(\xi) = \frac{dx}{d\xi} = \frac{b-a}{2}
# $$
# The idea of Gauss's method is to find the coefficients $c_i$ and nodes $x_i$ that maximize the degree of polynomial that can be integrated *exactly*:
#
# $$
# \int_{-1}^1 f(x) \:dx = c_1 f(x_1) + c_2 f(x_2)
# $$
#
# In this case there are 4 degrees of freedom: $c_1$, $c_2$, $x_1$ and $x_2$. A polynomial of degree 3 has 4 coefficients. Thus,
#
# $$
# \int_{-1}^1 \left(a_0 + a_1x + a_2x^2 + a_3x^3\right)\:dx = a_0\int_{-1}^1 1\:dx +
# a_1 \int_{-1}^1x \:dx + a_2\int_{-1}^1 x^2\:dx + a_3 \int_{-1}^1 x^3 \:dx
# $$
#
# Integrating the monomials 1, $x$, $x^2$ and $x^3$ exactly gives:
#
# $$
# c_1\cdot 1 + c_2\cdot 1 = \int_{-1}^1 1\:dx = 2\\
# c_1\cdot x_1 + c_2\cdot x_2 = \int_{-1}^1 x\:dx = 0\\
# c_1\cdot x_1^2 + c_2\cdot x_2^2 = \int_{-1}^1 x^2\:dx = \frac{2}{3}\\
# c_1\cdot x_1^3 + c_2\cdot x_2^3 = \int_{-1}^1 x^3\:dx = 0\\
# $$
#
# This is a system of 4 algebraic equations in 4 unknowns, whose solution is:
#
# $$
# c_1 = 1, \quad c_2=1, \quad x_1 = -\frac{\sqrt{3}}{3}, \quad x_2 = \frac{\sqrt{3}}{3}
# $$
#
# With these weights and nodes, integrals of polynomials of degree 3 or lower are computed exactly!
#
#
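# The derivation above can be checked numerically. A quick sketch in Python with NumPy (the rest of this notebook uses Julia), combining the two-point rule with the affine map and Jacobian for an arbitrary interval $[a, b]$:

```python
import numpy as np

# Two-point Gauss-Legendre nodes and weights on [-1, 1]:
# nodes ~ +-sqrt(3)/3, weights = [1, 1], matching the derivation above
nodes, weights = np.polynomial.legendre.leggauss(2)

def gauss2(f, a=-1.0, b=1.0):
    """Two-point Gauss rule on [a, b] via x(xi) and the Jacobian (b-a)/2."""
    x = (1 - nodes) / 2 * a + (1 + nodes) / 2 * b
    return (b - a) / 2 * np.sum(weights * f(x))

# Exact for polynomials of degree <= 3: the integral of x^3 over [0, 2] is 4
print(gauss2(lambda x: x**3, 0.0, 2.0))
```

# Raising the degree of the integrand beyond 3 makes the two-point rule only approximate, which is exactly the convergence behavior explored below.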
# # Generalizing the procedure
#
# This process could be repeated for higher-degree polynomials, or even with other kinds of functions, fixing some of the nodes or even some of the weights. But there are alternative ways of obtaining the nodes and weights.
#
# The `Jacobi` library <https://github.com/pjabardo/Jacobi.jl> computes nodes and weights for integrals of the form:
#
# $$
# \int_{-1}^1 (1-x)^\alpha (1+x)^\beta f(x)\:dx = \sum_{i=1}^n w_i^{\alpha,\beta} f(x_i)
# $$
using Jacobi
using PyPlot
using Polynomials
# # Examples
#
#
function gaussxw(n, α, β, ::Type{T}=Float64) where {T<:Number}
z = zgj(n, α, β, T)
w = wgj(z, α, β)
return z, w
end
applyint(g, f) = sum(f.(g[1]) .* g[2])
function calcerr(g, f, Ie)
I = applyint(g, f)
return abs(I-Ie)
end
setprecision(BigFloat, 2048)
nn1 = 2:25
gc1 = gaussxw.(nn1, 0, 0);
nn2 = 2:40
gc2 = gaussxw.(nn2, 0, 0, BigFloat);
# ## $1 + x + x^2 + x^3 + x^4 + x^5$
coefs = [1,1,1,1,1,1]
P1 = Poly(Float64.(coefs))
IP1 = polyint(P1)
Ie1 = IP1(1) - IP1(-1)
err1 = calcerr.(gc1, x->P1(x), Ie1);
loglog(nn1, err1, "bo")
# ## Degree-19 polynomial with random coefficients
coefs = randn(20)
P2 = Poly(Float64.(coefs))
IP2 = polyint(P2)
Ie2 = IP2(1) - IP2(-1)
err2 = calcerr.(gc1, x->P2(x), Ie2);
loglog(nn1, err2, "bo")
# ## $\cos nx$
#
# $$
# \int_{-1}^1 \cos n x \:dx = \frac{2\sin n}{n}
# $$
# +
n=4
f(x,n, ::Type{T}=Float64) where T = cos(n*x)
ife(n, ::Type{T}=Float64) where T = 2*sin(n*one(T))/n
xx = -1:0.01:1
plot(xx, f.(xx,n), "k-")
# +
n = 3
Ie3 = ife(n)
err3 = calcerr.(gc1, x->f(x,n), Ie3)
Ie3b = ife(n, BigFloat)
err3b = calcerr.(gc2, x->f(x,n,BigFloat), Ie3b)
loglog(nn2, err3b, "rs")
loglog(nn1, err3, "bo")
# -
# ## $\int_{-1}^1 \left(1 + x + \cdots + x^9\right)(1-x)^2(1+x)^3 \:dx$
#
# ### Standard Gauss
# +
P4 = Poly([1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0])
f4 = P4 * Poly([1.0, -1])^2 * Poly([1.0, 1.0])^3
IP4 = polyint(f4)
Ie4 = IP4(1.0) - IP4(-1.0)
# -
err4 = calcerr.(gc1, x->f4(x), Ie4);
loglog(nn1, err4, "bo")
# ### Using the weight function: Gauss-Jacobi
#
#
gc3 = gaussxw.(nn1, 2, 3);
err4b = calcerr.(gc3, x->P4(x), Ie4);
lgj = loglog(nn1, err4, "bo", label="Gauss")
lgjab = loglog(nn1, err4b, "rs", label="With weight")
legend()
# ## Same polynomial, but including the endpoints
function gaussxwL(n, α, β, ::Type{T}=Float64) where {T<:Number}
z = zglj(n, α, β, T)
w = wglj(z, α, β)
return z, w
end
gc4 = gaussxwL.(nn1, 0, 0);
gc5 = gaussxwL.(nn1, 2, 3);
err5 = calcerr.(gc4, x->f4(x), Ie4);
err5b = calcerr.(gc5, x->P4(x), Ie4);
lgj = loglog(nn1, err4, "bo", label="Gauss")
lgjab = loglog(nn1, err4b, "rs", label="With weight")
lglj = loglog(nn1, err5, "gx", label="Gauss-Lobatto")
lgjlab = loglog(nn1, err5b, "k.", label="GL with weight")
legend()
# # We can also include just one endpoint
#
# * Includes neither endpoint: `zgj`, `wgj`
# * Includes both endpoints: `zglj`, `wglj`
# * Includes only -1: `zgrjm`, `wgrjm`
# * Includes only +1: `zgrjp`, `wgrjp`
#
|
06b-Integracao_de_Gauss.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# [udacity](https://github.com/tensorflow/examples/blob/master/courses/udacity_deep_learning/5_word2vec.ipynb)
#
# [A study of the TensorFlow Embeddings example](https://liusida.github.io/2016/11/14/study-embeddings/)
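# Before the TensorFlow version below, the skip-gram idea itself can be sketched in a few lines of plain Python: each center word is paired with the words inside a window around it (a simplified, deterministic version of what `generate_batch` samples):

```python
def skipgram_pairs(tokens, window=1):
    """Generate (center, context) training pairs for a skip-gram model."""
    pairs = []
    for i, center in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:  # the center word is not its own context
                pairs.append((center, tokens[j]))
    return pairs

print(skipgram_pairs(["the", "quick", "brown", "fox"], window=1))
# [('the', 'quick'), ('quick', 'the'), ('quick', 'brown'),
#  ('brown', 'quick'), ('brown', 'fox'), ('fox', 'brown')]
```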
from __future__ import print_function
# %matplotlib inline
import collections
import math
import numpy as np
import os
import random
import tensorflow as tf
import zipfile
from matplotlib import pylab
from six.moves import range
from six.moves.urllib.request import urlretrieve
#from sklearn.manifold import TSNE
# +
def read_data(filename):
"""Extract the first file enclosed in a zip file as a list of words"""
with zipfile.ZipFile(filename) as f:
data = tf.compat.as_str(f.read(f.namelist()[0])).split()
return data
words = read_data("/Users/qtt/workspace/github/word2vec/text8.gz")
print('Word Num %d' % len(words))
print(words[0])
# +
vocabulary_size = 50000
def build_dataset(words):
count = [['UNK', -1]]
count.extend(collections.Counter(words).most_common(vocabulary_size - 1))
dictionary = dict()
for word, _ in count:
dictionary[word] = len(dictionary)
data = list()
unk_count = 0
for word in words:
if word in dictionary:
index = dictionary[word]
else:
index = 0 # dictionary['UNK']
unk_count = unk_count + 1
data.append(index)
count[0][1] = unk_count
reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
return data, count, dictionary, reverse_dictionary
data, count, dictionary, reverse_dictionary = build_dataset(words)
print('Most common words (+UNK)', count[:5])
print('Sample data', data[:10])
#del words # Hint to reduce memory
# +
data_index = 0
def generate_batch(batch_size, num_skips, skip_window):
global data_index
assert batch_size % num_skips == 0
assert num_skips <= 2 * skip_window
batch = np.ndarray(shape=(batch_size), dtype=np.int32)
labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
span = 2 * skip_window + 1 # [ skip_window target skip_window ]
buffer = collections.deque(maxlen=span)
for _ in range(span):
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
for i in range(batch_size // num_skips):
target = skip_window # target label at the center of the buffer
targets_to_avoid = [ skip_window ]
for j in range(num_skips):
while target in targets_to_avoid:
target = random.randint(0, span - 1)
targets_to_avoid.append(target)
batch[i * num_skips + j] = buffer[skip_window]
labels[i * num_skips + j, 0] = buffer[target]
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
return batch, labels
# +
print('data:', [reverse_dictionary[di] for di in data[:8]])
for num_skips, skip_window in [(2, 1), (4, 2)]:
data_index = 0
batch, labels = generate_batch(batch_size=8, num_skips=num_skips, skip_window=skip_window)
print('\nwith num_skips = %d and skip_window = %d:' % (num_skips, skip_window))
print(' batch:', [reverse_dictionary[bi] for bi in batch])
print(' labels:', [reverse_dictionary[li] for li in labels.reshape(8)])
# -
batch_size = 128
embedding_size = 128 # Dimension of the embedding vector.
skip_window = 1 # How many words to consider left and right.
num_skips = 2 # How many times to reuse an input to generate a label.
# We pick a random validation set to sample nearest neighbors. here we limit the
# validation samples to the words that have a low numeric ID, which by
# construction are also the most frequent.
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100 # Only pick dev samples in the head of the distribution.
valid_examples = np.array(random.sample(range(valid_window), valid_size))
num_sampled = 64 # Number of negative examples to sample.
# +
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
batch_size = 128
embedding_size = 128 # Dimension of the embedding vector.
skip_window = 1 # How many words to consider left and right.
num_skips = 2 # How many times to reuse an input to generate a label.
# We pick a random validation set to sample nearest neighbors. here we limit the
# validation samples to the words that have a low numeric ID, which by
# construction are also the most frequent.
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100 # Only pick dev samples in the head of the distribution.
valid_examples = np.array(random.sample(range(valid_window), valid_size))
num_sampled = 64 # Number of negative examples to sample.
graph = tf.Graph()
with graph.as_default(), tf.device('/cpu:0'):
# Input data.
train_dataset = tf.placeholder(tf.int32, shape=[batch_size])
train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# Variables.
embeddings = tf.Variable(
tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
softmax_weights = tf.Variable(
tf.truncated_normal([vocabulary_size, embedding_size],
stddev=1.0 / math.sqrt(embedding_size)))
softmax_biases = tf.Variable(tf.zeros([vocabulary_size]))
# Model.
# Look up embeddings for inputs.
embed = tf.nn.embedding_lookup(embeddings, train_dataset)
# Compute the softmax loss, using a sample of the negative labels each time.
loss = tf.reduce_mean(
tf.nn.sampled_softmax_loss(weights=softmax_weights, biases=softmax_biases, inputs=embed,
labels=train_labels, num_sampled=num_sampled, num_classes=vocabulary_size))
# Optimizer.
# Note: The optimizer will optimize the softmax_weights AND the embeddings.
# This is because the embeddings are defined as a variable quantity and the
# optimizer's `minimize` method will by default modify all variable quantities
# that contribute to the tensor it is passed.
# See docs on `tf.train.Optimizer.minimize()` for more details.
optimizer = tf.train.AdagradOptimizer(1.0).minimize(loss)
# Compute the similarity between minibatch examples and all embeddings.
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keepdims=True))
normalized_embeddings = embeddings / norm
valid_embeddings = tf.nn.embedding_lookup(
normalized_embeddings, valid_dataset)
similarity = tf.matmul(valid_embeddings, tf.transpose(normalized_embeddings))
# +
num_steps = 100001
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print('Initialized')
average_loss = 0
for step in range(num_steps):
batch_data, batch_labels = generate_batch(
batch_size, num_skips, skip_window)
feed_dict = {train_dataset : batch_data, train_labels : batch_labels}
_, l = session.run([optimizer, loss], feed_dict=feed_dict)
average_loss += l
if step % 2000 == 0:
if step > 0:
average_loss = average_loss / 2000
# The average loss is an estimate of the loss over the last 2000 batches.
print('Average loss at step %d: %f' % (step, average_loss))
average_loss = 0
# note that this is expensive (~20% slowdown if computed every 500 steps)
if step % 10000 == 0:
sim = similarity.eval()
for i in range(valid_size):
valid_word = reverse_dictionary[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = reverse_dictionary[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
# -
|
t_tf/embedding.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/dgoppenheimer/Molecular-Dynamics/blob/main/gromacs_test.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="QAlZWjc28b9n"
# ## Installation of GROMACS
# + [markdown] id="iKRahtIG8rtE"
# ### Download GROMACS
# + [markdown] id="shEWSzrI81rk"
# These instructions were largely adapted from [Installing Software on Google Colab for IBM3202 tutorials](https://colab.research.google.com/github/pb3lab/ibm3202/blob/master/tutorials/lab00_software.ipynb). I updated the GROMACS version, and to do that I needed to upgrade `cmake`.
#
# <mark>Installation of this software takes about 40 min.</mark> Therefore, we will save the compiled software on Google Drive to save time later.
#
#
# + [markdown] id="hQyXMnCc_Bn-"
# <mark>**VERY IMPORTANT FIRST STEP:**</mark> Go to the Menu → *Runtime* → *Change Runtime Type* and choose GPU!
#
# **Note:** a page reload will be required. This is okay.
# + [markdown] id="huGay-vDIepe"
# First, let's confirm that we are in the correct directory.
# + colab={"base_uri": "https://localhost:8080/"} id="M-HbnMg8SxXq" outputId="bd1e505c-896b-468f-fde7-a8e814faf0bf"
# !pwd
# + id="H18SXwk5rikC" colab={"base_uri": "https://localhost:8080/"} outputId="eb5a1ee9-b94c-4926-c66a-6c47f2a50b72"
#Download GROMACS 2021.5
# !wget https://ftp.gromacs.org/gromacs/gromacs-2021.5.tar.gz
# + [markdown] id="3hk0CS4_-c7m"
# ### Install GROMACS
# + [markdown] id="Tb1NAV6DA57K"
# We will install the software into a pre-defined user directory that will not be deleted when we quit this notebook.
#
# Start a `bash` subshell to run several `bash` commands in the same code cell.
# + colab={"base_uri": "https://localhost:8080/"} id="2-s3lxeEF5WI" outputId="63aa65c7-b7ca-4709-d509-4d66ea1c3e99" language="bash"
# # extracting the software
# tar xfz gromacs-2021.5.tar.gz
# echo "GROMACS extraction completed"
# + id="mjcZA2s5GD4B" colab={"base_uri": "https://localhost:8080/"} outputId="62397978-744c-4b91-81a3-eea216f1561b" language="bash"
# # create and enter the build directory
# cd gromacs-2021.5
# mkdir build
# cd build
# + colab={"base_uri": "https://localhost:8080/"} id="XM7AKR4gQgks" outputId="68e61631-e735-454b-e70b-d50d1a32ab5a"
# check the cmake version
# !cmake --version
# + colab={"base_uri": "https://localhost:8080/"} id="w5cyzVIWRnCR" outputId="8113dff7-2836-46bc-bf0b-1cc75676d5e3"
# !apt remove cmake
# + colab={"base_uri": "https://localhost:8080/"} id="EUBZtuiRRr66" outputId="a87da7df-c3a8-420e-af61-ac76c59bf807"
# !pip install cmake --upgrade
# + colab={"base_uri": "https://localhost:8080/"} id="7M_55g7tR2zW" outputId="d51fd887-1813-425d-ce0c-be670e2d26db"
# !cmake --version
# + colab={"base_uri": "https://localhost:8080/"} id="zgoXB3G0UX1O" outputId="7ac8d299-e814-4bc8-805a-2325e2969b77"
# !pwd
# + colab={"base_uri": "https://localhost:8080/"} id="CyZr8uO0UbXH" outputId="737c5a0e-bd8b-4f09-bfdc-bc45637dc300"
# %cd gromacs-2021.5/build/
# + colab={"base_uri": "https://localhost:8080/"} id="moT_O0yNUsRW" outputId="580c2e23-5195-41fe-942b-f0faa3abf09e"
# !cmake --version
# + colab={"base_uri": "https://localhost:8080/"} id="eWxtseBcUzSX" outputId="3fb40af7-0abb-4091-a69d-ff47a6eaebee"
# !apt remove cmake
# + colab={"base_uri": "https://localhost:8080/"} id="XYqnQ4AaU75x" outputId="91256526-87b4-4f83-a4ff-62b6a2b83278"
# !pip install cmake --upgrade
# + colab={"base_uri": "https://localhost:8080/"} id="56O_7W2IVBFe" outputId="1e3d2495-cd4d-40f7-ad6a-464abed538c8"
# !cmake --version
# + colab={"base_uri": "https://localhost:8080/"} id="KnHphphtHRn-" outputId="958a2c11-c32e-4402-f360-1a6ba3539ebc"
#@title make
# %%bash
# had to change -DGMX_GPU=on to -DGMX_GPU=CUDA.
cmake .. -DGMX_BUILD_OWN_FFTW=ON -DGMX_GPU=CUDA -DCMAKE_INSTALL_PREFIX=/content/gromacs-2021
# + colab={"base_uri": "https://localhost:8080/"} id="a99Qe0ogVhSf" outputId="8068afaa-2578-4b78-c1fc-0e2d5b2216e4" language="bash"
# make
# # ~20 min?
# + colab={"base_uri": "https://localhost:8080/"} id="nd_00PnZcy2k" outputId="fab9a230-ff18-43dc-86ba-ecf7ec45bb2b" language="bash"
# make check
# # 31 min
# + colab={"base_uri": "https://localhost:8080/"} id="TqpFl_f3l1KO" outputId="f716b279-2773-4ac5-850a-47940269aace" language="bash"
# make install
# + [markdown] id="Iji7m0TCmZJJ"
# We now check that the installation was successful by loading the GROMACS PATH onto Google Colab.
# + colab={"base_uri": "https://localhost:8080/"} id="YWrAiq7DmBIS" outputId="44ff5388-9836-492b-cf52-5300bd314713"
##Checking that GROMACS was successfully installed
# %%bash
source /content/gromacs-2021/bin/GMXRC
gmx -h
# + [markdown] id="f7AITKkAmJQ4"
# <mark>SWEET!</mark>
# + colab={"base_uri": "https://localhost:8080/"} id="JF5NIzZimzgE" outputId="dc080201-ec0e-403c-f47b-e507a3e175fd"
# !pwd
# + id="W9gkxBMTm2ep" colab={"base_uri": "https://localhost:8080/"} outputId="8ea155eb-7b57-4d71-accb-bcfeee351e12"
# %cd ../../drive/MyDrive
# + id="ERImL_mukPdk" colab={"base_uri": "https://localhost:8080/", "height": 374} outputId="1029a457-b455-46ce-aefa-5245b43fc8c7"
#Copying your compiled GROMACS to your Google Drive
#We will create and/or use the IBM3202 folder to create a folder for compiled programs
import os
import shutil
from pathlib import Path
IBM3202 = Path("/content/drive/MyDrive/IBM3202/")
if os.path.exists(IBM3202):
print("IBM3202 already exists")
if not os.path.exists(IBM3202):
    os.mkdir(IBM3202)
    print("IBM3202 did not exist and was successfully created")
#Then, we will copy the compiled GROMACS to this folder
shutil.copytree(str('/content/gromacs-2021'), str(IBM3202/'gromacs-2021'))
# #!cp -d -r /content/gromacs-2021 "$IBM3202"/gromacs-2021
print("GROMACS successfully backed up!")
# + [markdown] id="ifRa0BB8N3gn"
# ## Important Code to Run
# + [markdown] id="_czdoLTkN8v1"
# **Connect to a runtime**
# The code cells below need to be run each time you return to Colab.
#
# + colab={"base_uri": "https://localhost:8080/"} outputId="a1df7b45-de1c-4d56-c3e0-077a59f4948b" id="rCgLIRYmOOVz"
# This gets you into the correct directory
# %cd /content/drive/MyDrive/
# + id="Uz55mD2FO9FG"
# Give permissions to run gmx
# !chmod 755 -R /content/drive/MyDrive/IBM3202/gromacs-2021
# + id="R7dcUk_XPNFY"
# Import stuff for graphing
import pandas as pd
import numpy as np
import plotly.graph_objects as go
import plotly.express as px
from pathlib import Path
# + [markdown] id="fx-BrZBE35xY"
# ## Using GROMACS
# + id="tqvwCXoSt9rc" colab={"base_uri": "https://localhost:8080/"} outputId="ad477e4d-9dc0-4949-8a97-1da3ed028915"
# Checking that our GROMACS works
# %%bash
source /content/drive/MyDrive/IBM3202/gromacs-2021/bin/GMXRC
gmx -h
# + [markdown] id="v_9da5886G9m"
# Need to change path in `GMXRC`. On line 13, change
#
# ```bash
# . /content/gromacs-2021/bin/GMXRC.bash
# ```
# to
# ```bash
# . /content/drive/MyDrive/IBM3202/gromacs-2021/bin/GMXRC.bash
# ```
#
# + colab={"base_uri": "https://localhost:8080/"} id="ieF_CEa-7LUP" outputId="d5033eaf-ba88-42c4-e8a2-91b284be5613"
# Checking that our GROMACS works
# %%bash
source /content/drive/MyDrive/IBM3202/gromacs-2021/bin/GMXRC
gmx -h
# + [markdown] id="T7rXLBEj8QTJ"
# Need to fix the path in `GMXRC.bash`. Change line 53 to `GMXPREFIX=/content/drive/MyDrive/IBM3202/gromacs-2021`
# + colab={"base_uri": "https://localhost:8080/"} outputId="62f29420-4c79-448c-c9ca-c221591c6b6e" id="PdgLWRPH81vX"
# Checking that our GROMACS works
# %%bash
source /content/drive/MyDrive/IBM3202/gromacs-2021/bin/GMXRC
gmx -h
# + [markdown] id="Lrrf1Vhl9LEk"
# Ouch!
# + id="sBZMjG6-9O5e"
# !chmod 755 -R /content/drive/MyDrive/IBM3202/gromacs-2021
# + id="CzHIIAIH9cM_"
# Checking that our GROMACS works
# %%bash
source /content/drive/MyDrive/IBM3202/gromacs-2021/bin/GMXRC
gmx -h
# + [markdown] id="y05dHr3Z9fi0"
# <mark>SWEET! Finally!</mark>
#
# Okay, it looks like the saved GROMACS binary is working.
# + [markdown] id="YJDWcaVP3_Dd"
# For this test, we will follow the excellent tutorial by <NAME>, [Lysozyme in water](http://www.mdtutorials.com/gmx/lysozyme/index.html), but we will use a different protein. Here we will use the human prion protein (RCSB ID: 1QLZ).
# + colab={"base_uri": "https://localhost:8080/"} id="nqqP9HvQ_fZ1" outputId="9f6e65ee-4ca9-4b57-c646-946b9991b459"
# download the coordinate file itself (the /structure/ page is HTML, not a PDB file)
# !wget https://files.rcsb.org/download/1qlz.pdb
# + [markdown] id="drjUjOxbMUh8"
# ## Preparing Structure Files
# + [markdown] id="ToFRL7-rNYZE"
# The 1QLZ protein structure was solved by NMR, which means that 20 structures were deposited in one `.pdb` file. Here we will use the *stream editor*, `sed`, because `grep` is designed for use on lines of text and we want to collect a block of text<a name="cite_ref-1"></a>[<sup>[1]</sup>](#cite_note-1).
#
# The `.pdb` file has the following format:
#
# ```pdb
# MODEL 1
# ATOM 1 N LEU A 125 4.329 -12.012 2.376 1.00 0.00 N
# ATOM 2 CA LEU A 125 5.029 -10.769 2.674 1.00 0.00 C
# ...
# ENDMDL
# MODEL 2
# ATOM 1 N LEU A 125 5.962 -12.281 -0.586 1.00 0.00 N
# ATOM 2 CA LEU A 125 6.228 -10.948 -0.052 1.00 0.00 C
# ...
# ```
#
# Note that each model is preceded by a line that designates the model number (`MODEL 1`, `MODEL 2`, and so on) and ends with the line `ENDMDL`. We can use these as starting and stopping patterns for each model that we want to extract from this file.
#
# *Regex*, which is short for *Regular Expressions*<a name="cite_ref-2"></a>[<sup>[2]</sup>](#cite_note-2) is used for search and replace of characters in certain text files (but cannot be used for `html` files--see [Stackoverflow](https://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags/1732454#1732454)). In our case we want to extract the text between the patterns `MODEL 1` and `ENDMDL` (including the patterns) into a new file. I got the code, below, from [Stackoverflow](https://stackoverflow.com/questions/4857424/extract-lines-between-2-tokens-in-a-text-file-using-bash), but had to modify it for using `sed` on Mac OSX (you need the `-E` option). The `-E` option may or may not be necessary for Colab.
#
# ```bash
# # Here is the sed command
# sed -E -n '/^MODEL +1 /,/^ENDMDL/w 1qLz-model1.pdb' 1qlz.pdb
# ```
#
# #### Explanation of command
#
# - `sed -E` use extended regular expressions
# - `-n` do not echo every line to output
# - `'/START/,/STOP/'` pattern to search for; we start at lines that begin with `MODEL 1` and end with lines that begin with `ENDMDL`
# - `^` is Regex for the beginning of a line
# - `<space> +` search for 1 or more spaces
# - `1 <space>`, search for the number 1 followed by a space (or else you get model 19)
# - `w 1qLz-model1.pdb` write the output to the file `1qLz-model1.pdb`
# - `1qlz.pdb` is the input file
#
# ---
# <a name="cite_note-1"></a>1. It is possible to extract blocks of text using `grep` but it is not as easy as using `sed`.[↩](#cite_ref-1)
#
# <a name="cite_note-2"></a>2. The [Python Regex Cheat Sheet](https://www.geeksforgeeks.org/python-regex-cheat-sheet/) has a list of many common regular expressions and is a useful reference.[↩](#cite_ref-2)
#
#
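# The same extraction can also be done in Python, which sidesteps the `sed` portability quirks mentioned above. A sketch that scans line by line for the `MODEL`/`ENDMDL` markers, splitting on whitespace so `MODEL 1` does not also match `MODEL 19`:

```python
def extract_model(lines, model=1):
    """Return the lines of one MODEL...ENDMDL block from a multi-model PDB."""
    out, inside = [], False
    for line in lines:
        if line.split()[:2] == ["MODEL", str(model)]:
            inside = True
        if inside:
            out.append(line)
        if inside and line.startswith("ENDMDL"):
            break
    return out

# A tiny stand-in for the real 1qlz.pdb contents
pdb = [
    "MODEL        1",
    "ATOM      1  N   LEU A 125       4.329 -12.012   2.376",
    "ENDMDL",
    "MODEL        2",
    "ATOM      1  N   LEU A 125       5.962 -12.281  -0.586",
    "ENDMDL",
]
print(extract_model(pdb, model=1))
```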
# + [markdown] id="TqnecKtgzdLY"
# #### Questions
#
# Look at your single-model file. How many amino acids are in the protein? (Hint: you can use `grep` to quickly determine this).
# + id="JwVWrtlav7FH"
# + [markdown] id="u1R59j1dS6-S"
# ## Preparing the Simulation
# + [markdown] id="BwS4NP4CUoFi"
# For this test, I will follow along with the [Molecular Modeling Practical](http://md.chem.rug.nl/~mdcourse/molmod2012/md.html) and with the [Lysozyme in water](http://www.mdtutorials.com/gmx/lysozyme/01_pdb2gmx.html) tutorial.
#
# Note that the protein structure file we are using has no missing loops or other problems.
# + [markdown] id="EXdFw7Y9VdRR"
# ### Structure Conversion And Topology
# + [markdown] id="ypBCwYRGVqZa"
# The original practical uses the GROMOS 45a3 force field with the SPC water model; in the command below we will use the OPLS-AA force field with the SPC/E water model instead.
#
# To make things easier, rename the `1qLz-model1.pdb` file to `protein.pdb`.
#
#
# + colab={"base_uri": "https://localhost:8080/"} id="LrvdKYWhUiVr" outputId="b7cd029d-5bb4-4a3a-fd06-34adb82aa41c"
# %cd /content/drive/MyDrive/
# + colab={"base_uri": "https://localhost:8080/"} id="kSZWdXGnWbXM" outputId="b99f2f60-2eb3-45ae-b696-0f23119172fc"
# !pwd
# + id="QkqXrs2PXpQ5"
# %mv 1qLz-model1.pdb protein.pdb
# + [markdown] id="0TgrQ5svYAQj"
# Usually when running `pdb2gmx` we interactively select the force field, and the water model, but when using Colab, we have to specify them when running the command.
#
#
# ```bash
# gmx pdb2gmx -f protein.pdb -o protein.gro -p protein.top -ignh -water spce -ff amber99sb-ildn
# ```
# + colab={"base_uri": "https://localhost:8080/"} id="RsSXLTR6ZA8Q" outputId="098ec8a3-3249-4880-a20f-049427536a0a" language="bash"
# source /content/drive/MyDrive/IBM3202/gromacs-2021/bin/GMXRC
# gmx pdb2gmx -f protein.pdb -o protein.gro -p protein.top -ignh -water spce -ff oplsaa
# + [markdown] id="vgmGyIlvcLyx"
# #### Questions
# + [markdown] id="D6HeuU6fcPDu"
# Write down the number of atoms before and after the conversion and explain the difference.
#
# List the atoms, atom types and charges from a tyrosine residue as given in the topology file
# + [markdown] id="7EsW0ZlVcWm4"
# ### Energy Minimization
# + [markdown] id="Grse9SUCcosZ"
# Use the `minim.mdp` file from [here](http://md.chem.rug.nl/~mdcourse/molmod2012/minim.mdp). Transfer it to your `/content/drive/MyDrive` directory.
#
# NOTE: this `.mdp` file causes a fatal error with the current version of GROMACS.
#
# ```bash
# # # %%bash
# source /content/drive/MyDrive/IBM3202/gromacs-2021/bin/GMXRC
# gmx grompp -f minim.mdp -c protein.gro -p protein.top -o protein-EM-vacuum.tpr
# ```
#
#
# + id="WWL-STCEeB0w" language="bash"
# # this had a fatal error
# source /content/drive/MyDrive/IBM3202/gromacs-2021/bin/GMXRC
# gmx grompp -f minim.mdp -c protein.gro -p protein.top -o protein-EM-vacuum.tpr
# + [markdown] id="SWB2ZbzYfdT2"
# ### Solvation
# + [markdown] id="HQ25ZH4lfqs-"
# The below is from <NAME>'s [Lysozyme in water](http://www.mdtutorials.com/gmx/lysozyme/03_solvate.html) tutorial.
#
# ```bash
# gmx editconf -f protein.gro -o protein_newbox.gro -c -d 1.0 -bt cubic
# ```
#
# + colab={"base_uri": "https://localhost:8080/"} id="g7_hmHcEgEJY" outputId="2e896f83-ebb9-403e-fe80-93749b2483bc" language="bash"
# source /content/drive/MyDrive/IBM3202/gromacs-2021/bin/GMXRC
# gmx editconf -f protein.gro -o protein_newbox.gro -c -d 1.0 -bt cubic
# + colab={"base_uri": "https://localhost:8080/"} id="D5yfr1QtguLw" outputId="0e01e278-87b8-4004-e99b-162f86ffbf3d" language="bash"
# source /content/drive/MyDrive/IBM3202/gromacs-2021/bin/GMXRC
# gmx solvate -cp protein_newbox.gro -cs spc216.gro -o protein_solv.gro -p protein.top
# + [markdown] id="RYCukHlRhdTN"
# ### Adding Ions
# + [markdown] id="wKHFGZOhhgD8"
# Use the `.mdp` file from [here](http://www.mdtutorials.com/gmx/lysozyme/Files/ions.mdp).
#
# ```bash
# gmx grompp -f ions.mdp -c protein_solv.gro -p protein.top -o ions.tpr
# ```
# + colab={"base_uri": "https://localhost:8080/"} id="WYAPRTPUjXh7" outputId="5183e030-4a3b-463c-dda6-d742be02574d" language="bash"
# source /content/drive/MyDrive/IBM3202/gromacs-2021/bin/GMXRC
# gmx grompp -f ions.mdp -c protein_solv.gro -p protein.top -o ions.tpr
# + colab={"base_uri": "https://localhost:8080/"} id="n981mY1Lj241" outputId="2f2cc7f8-6805-4726-e5f8-e647fba1b173" language="bash"
# source /content/drive/MyDrive/IBM3202/gromacs-2021/bin/GMXRC
# gmx genion -s ions.tpr -o protein_solv_ions.gro -p protein.top -pname NA -nname CL -neutral
# + [markdown] id="9N__Z10TmeDU"
# [Simulating User Interaction In Gromacs in Bash](https://stackoverflow.com/questions/45885541/simulating-user-interaction-in-gromacs-in-bash)
#
# Note the `echo 13 | gmx genion ...` should work too.
# + colab={"base_uri": "https://localhost:8080/"} id="y-bTw7R5mC9m" outputId="548f4ee7-cc89-438f-d5ac-baef60a517fc"
# try this
# %%bash
source /content/drive/MyDrive/IBM3202/gromacs-2021/bin/GMXRC
gmx genion -s ions.tpr -o protein_solv_ions.gro -p protein.top -pname NA -nname CL -neutral <<EOF
13
EOF
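# The same trick of feeding a canned menu answer on stdin works from Python with `subprocess`; here `cat` stands in for `gmx genion`, so this only demonstrates the input plumbing:

```python
import subprocess

# Feed "13\n" on stdin, exactly what the heredoc above does for gmx genion
result = subprocess.run(["cat"], input="13\n", capture_output=True, text=True)
print(result.stdout)
```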
# + [markdown] id="6TCF1kfqnHSF"
# ### Energy Minimization
# + [markdown] id="SuaUGqT2nK3m"
#
# + colab={"base_uri": "https://localhost:8080/"} id="T1xLBKf0nrWb" outputId="92ae1d7b-8ade-4131-c8da-88a4f2b37cbd" language="bash"
# source /content/drive/MyDrive/IBM3202/gromacs-2021/bin/GMXRC
# gmx grompp -f minim.mdp -c protein_solv_ions.gro -p protein.top -o em.tpr
# + colab={"base_uri": "https://localhost:8080/"} id="nux-DugyoISL" outputId="65e144ba-70dd-47f0-be22-82a069496e10" language="bash"
# source /content/drive/MyDrive/IBM3202/gromacs-2021/bin/GMXRC
# gmx mdrun -v -deffnm em
# + colab={"base_uri": "https://localhost:8080/"} id="eKDeHQ_rpOPM" outputId="2aa05d62-1e75-499b-d690-cc583a48dec2" language="bash"
# source /content/drive/MyDrive/IBM3202/gromacs-2021/bin/GMXRC
# gmx energy -f em.edr -o potential.xvg <<EOF
# 10 0
# EOF
# + [markdown] id="3YHsCUSgriFV"
# ```py
# import numpy as np
# import matplotlib.pyplot as plt
#
# x, y = [], []
#
# with open("data.xvg") as f:
# for line in f:
# cols = line.split()
#
# if len(cols) == 2:
# x.append(float(cols[0]))
# y.append(float(cols[1]))
#
#
# fig = plt.figure()
# ax1 = fig.add_subplot(111)
# ax1.set_title("Plot title...")
# ax1.set_xlabel('your x label..')
# ax1.set_ylabel('your y label...')
# ax1.plot(x,y, c='r', label='the data')
# leg = ax1.legend()
# plt.show()
# ```
#
# also
#
# ```py
# x,y = np.loadtxt("file.xvg",comments="@",unpack=True)
# plt.plot(x,y)
# ```
# + colab={"base_uri": "https://localhost:8080/"} id="VC7AGWf7dOwI" outputId="17ea4e41-e943-4476-972e-8d8d3920cdb5"
# %cd /content/drive/MyDrive/
# + id="5pImz7SZc8PA"
# import stuff
import plotly.graph_objects as go
import pandas as pd
import numpy as np
import plotly.express as px
from pathlib import Path
# + id="f-noDWCafB-1"
# Remove the comments from the xvg file and write a new file
# !grep -v -e '\#' -e '\@' potential.xvg > potential.csv
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="YoOXZN1PdV_N" outputId="70f86c5a-d5b4-4417-9c02-dc83858d7f1c"
# create the graph
df = pd.read_csv('potential.csv',
                 sep=r'\s\s+', engine='python')
df.columns = ["Time (ps)", "kJ/mol"]  # add headers to columns
fig = px.line(df, x="Time (ps)", y="kJ/mol")
fig.update_layout(width=700, title_text="GROMACS Energies")
fig.show()
# + [markdown] id="rJjAM-MalSHl"
# Let's see if I can do it another way.
#
# ```py
# data = np.loadtxt('potential.xvg',comments=['#', '@'])
# ```
#
# I want to try to import the data without having to `grep` it into a new file.
# + colab={"base_uri": "https://localhost:8080/", "height": 542} outputId="1f7621c7-d0de-46fe-f80f-91e9d420cc00" id="Xm7ld47qmQhR"
# create the graph
import numpy as np
import pandas as pd
a = np.loadtxt('potential.xvg',comments=['#', '@']) # strip out the comments
header = ["x", "y"] # add headers to columns
frame = pd.DataFrame(a, columns=header)
fig = px.line(frame, x="x", y="y")
fig.update_xaxes(title_text="Time (ps)") # label x-axis
fig.update_yaxes(title_text="kJ/mol") # label y-axis
fig.update_layout(width=700, title_text="GROMACS Energies")
fig.show()
# + [markdown] id="fHWEeEMCvwL7"
# ## Equilibration
# + [markdown] id="oyJ0ZrRowlDw"
# Upload the `nvt.mdp` file to Colab.
# + id="wD-bgq0vxMtl"
# !chmod 755 -R /content/drive/MyDrive/IBM3202/gromacs-2021
# + colab={"base_uri": "https://localhost:8080/"} id="GACTYvWbwul-" outputId="77cdb5a7-be3f-486b-e4ae-17e84b671402" language="bash"
# source /content/drive/MyDrive/IBM3202/gromacs-2021/bin/GMXRC
# gmx grompp -f nvt.mdp -c em.gro -r em.gro -p protein.top -o nvt.tpr
# + colab={"base_uri": "https://localhost:8080/"} id="p_6Yz1XpxVIT" outputId="1406ca09-e396-4c35-bd89-972506aabebf" language="bash"
# source /content/drive/MyDrive/IBM3202/gromacs-2021/bin/GMXRC
# gmx mdrun -deffnm nvt
#
# # this took 1 min to run instead of 1 hr
# + [markdown] id="1VSamgdR1LtA"
# Plot the energy.
# + colab={"base_uri": "https://localhost:8080/"} id="Kq-T6im11OJ8" outputId="e09d3bfe-2fc9-4bea-8aa1-864a1a6b68ea" language="bash"
# source /content/drive/MyDrive/IBM3202/gromacs-2021/bin/GMXRC
# gmx energy -f nvt.edr -o temperature.xvg <<EOF
# 16 0
# EOF
# + [markdown] id="bz8SsNTA3zCi"
# Let's plot the temperature vs time. First we'll look at the file.
# + colab={"base_uri": "https://localhost:8080/"} id="jI30SDnU35US" outputId="2f4924ef-69ef-4367-a076-604f3099643b"
# glance at the file
# !head -20 temperature.xvg
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="6s6SdYel5OU5" outputId="ee052cde-2907-4a41-da14-c1733de2d9e6"
# plot the graph
b = np.loadtxt('temperature.xvg',comments=['#', '@']) # strip out the comments
header = ["x", "y"] # add headers to columns
frame2 = pd.DataFrame(b, columns=header)
fig2 = px.line(frame2, x="x", y="y")
fig2.update_xaxes(title_text="Time (ps)") # label x-axis
fig2.update_yaxes(title_text="K") # label y-axis
fig2.update_layout(width=700, title_text="GROMACS Energies")
fig2.show()
# + [markdown] id="bKW6NIj25vHp"
# ## Equilibration Part 2
# + [markdown] id="Uz1g5ois5zIW"
#
# + colab={"base_uri": "https://localhost:8080/"} id="Xmj7aKmx6jSP" outputId="e9c65786-b278-4bbf-d563-e49081b58120" language="bash"
# source /content/drive/MyDrive/IBM3202/gromacs-2021/bin/GMXRC
# gmx grompp -f npt.mdp -c nvt.gro -r nvt.gro -t nvt.cpt -p protein.top -o npt.tpr
# + colab={"base_uri": "https://localhost:8080/"} id="WJJipBgN6tnh" outputId="9d349ffc-d202-4d11-985a-37c04f3ede63" language="bash"
# source /content/drive/MyDrive/IBM3202/gromacs-2021/bin/GMXRC
# gmx mdrun -deffnm npt
# + colab={"base_uri": "https://localhost:8080/"} id="oMW9p2P28MQw" outputId="e10d9b8d-382e-417c-cda9-c889adcb310d" language="bash"
# source /content/drive/MyDrive/IBM3202/gromacs-2021/bin/GMXRC
# gmx energy -f npt.edr -o pressure.xvg <<EOF
# 18 0
# EOF
# + id="TDGqQfbN8rNp"
# !head -20 pressure.xvg
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="RHzOp-4R8y69" outputId="88a2d07e-52f9-41a1-8c57-480e4c8b6f81"
# plot the graph
c = np.loadtxt('pressure.xvg',comments=['#', '@']) # strip out the comments
header = ["x", "y"] # add headers to columns
frame3 = pd.DataFrame(c, columns=header)
fig3 = px.line(frame3, x="x", y="y")
fig3.update_xaxes(title_text="Time (ps)") # label x-axis
fig3.update_yaxes(title_text="bar") # label y-axis
fig3.update_layout(width=700, title_text="GROMACS Energies")
fig3.show()
# + [markdown] id="Gpt6uKo-CUAD"
# From [this site](https://stackoverflow.com/questions/55512643/set-up-multiple-subplots-with-moving-averages-using-cufflinks-and-plotly-offline)
# + [markdown] id="Pz6yMl-eCd1E"
#
#
# ```py
# df = cf.datagen.lines().iloc[:,0:4]
# df.columns = ['StockA', 'StockB', 'StockC', 'StockD']
#
# # Function for moving averages
# def movingAvg(df, win, keepSource):
# """Add moving averages for all columns in a dataframe.
#
# Arguments:
# df -- pandas dataframe
# win -- length of movingAvg estimation window
# keepSource -- True or False for keep or drop source data in output dataframe
#
#
# ```
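The snippet above is cut off mid-docstring. A minimal, hypothetical version of the same idea (not the linked answer's code) using pandas `rolling` might look like:

```python
import pandas as pd

def moving_avg(df, win, keep_source=True):
    """Add a moving-average column for every column in a dataframe."""
    out = df.copy() if keep_source else pd.DataFrame(index=df.index)
    for col in df.columns:
        # e.g. column "y" with win=10 gains a "y_ma10" column
        out[f"{col}_ma{win}"] = df[col].rolling(win).mean()
    return out

dd = pd.DataFrame({"y": [1.0, 2.0, 3.0, 4.0]})
print(moving_avg(dd, 2))
```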
# + id="Odhymy6fPJxn"
# Remove the comments from the xvg file and write a new file
# !grep -v -e '\#' -e '\@' pressure.xvg > pressure.csv
# + [markdown] id="1JRt9AU5Wpra"
# From [this site](https://www.geeksforgeeks.org/how-to-calculate-moving-average-in-a-pandas-dataframe/)
# + colab={"base_uri": "https://localhost:8080/", "height": 542} outputId="d85bf661-b4d1-4585-f415-f44709bce523" id="tqYNWvCZVrG0"
# moving average
c = np.loadtxt('pressure.xvg',comments=['#', '@']) # strip out the comments
header = ["x", "y"] # add headers to columns
frame3 = pd.DataFrame(c, columns=header)
# keep only the 'y' column, using .to_frame() to
# convert the pandas series into a dataframe
frame3b = frame3['y'].to_frame()
# calculate a simple moving average
# using .rolling(window).mean(), with window size = 10
frame3b['ma10'] = frame3b['y'].rolling(10).mean()
fig3 = px.line(frame3, x="x", y="y")
fig3.update_traces(line=dict(color = 'dodgerblue'),
name="pressure")
fig3b = px.line(frame3b, y="ma10")
fig3b.update_traces(line=dict(color = 'firebrick'),
name="10ps average")
fig3c = go.Figure(data=fig3.data + fig3b.data)
fig3c.update_layout(width=700, title_text="Pressure <br>NPT Equilibration")
fig3c.update_xaxes(title_text="Time (ps)") # label x-axis
fig3c.update_yaxes(title_text="bar") # label y-axis
fig3c.update_traces(showlegend=True)
fig3c.show()
# + [markdown] id="03pfw2TDeZMP"
# Let's look at density.
# + colab={"base_uri": "https://localhost:8080/"} id="RkUczL7qecOI" outputId="ddfe64d5-c92f-43d9-8228-57d85b85f0da" language="bash"
# source /content/drive/MyDrive/IBM3202/gromacs-2021/bin/GMXRC
# gmx energy -f npt.edr -o density.xvg <<EOF
# 24 0
# EOF
# + colab={"base_uri": "https://localhost:8080/"} id="CExJjf0PgL77" outputId="3ad54434-752d-4d71-bc3e-cbade7e9f390"
# !head -20 density.xvg
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="IBJl7j_EfBIF" outputId="79bc6c0a-c597-4387-f71b-2f839ff26f66"
# let's plot this
# moving average
c = np.loadtxt('density.xvg',comments=['#', '@']) # strip out the comments
header = ["x", "y"] # add headers to columns
frame4 = pd.DataFrame(c, columns=header)
# keep only the 'y' column, using .to_frame() to
# convert the pandas series into a dataframe
frame4b = frame4['y'].to_frame()
# calculate a simple moving average
# using .rolling(window).mean(), with window size = 10
frame4b['ma10'] = frame4b['y'].rolling(10).mean()
fig4 = px.line(frame4, x="x", y="y")
fig4.update_traces(line=dict(color = 'dodgerblue'),
name="kg/m<sup>3</sup>")
fig4b = px.line(frame4b, y="ma10")
fig4b.update_traces(line=dict(color = 'firebrick'),
name="10 ps running average")
fig4c = go.Figure(data=fig4.data + fig4b.data)
fig4c.update_layout(width=700, title_text="Density<br>NPT Equilibration")
fig4c.update_xaxes(title_text="Time (ps)") # label x-axis
fig4c.update_yaxes(title_text="kg/m<sup>3</sup>") # label y-axis
fig4c.update_traces(showlegend=True)
fig4c.show()
# + [markdown] id="c7K_ZkDlkUa1"
# ## Production MD Simulation
# + [markdown] id="1IT9_cf0mMkJ"
#
# + colab={"base_uri": "https://localhost:8080/"} id="QqkvLI9XmoAp" outputId="0c82efc0-aaa9-4fd8-c3cb-253d1db421d9" language="bash"
# source /content/drive/MyDrive/IBM3202/gromacs-2021/bin/GMXRC
# gmx grompp -f md.mdp -c npt.gro -t npt.cpt -p protein.top -o md_0_1.tpr
# + colab={"base_uri": "https://localhost:8080/"} id="eVG6ZhQOotc7" outputId="f0d39248-8127-4e48-b7ac-666cc811218d" language="bash"
# source /content/drive/MyDrive/IBM3202/gromacs-2021/bin/GMXRC
# gmx mdrun -deffnm md_0_1 -nb gpu
# + [markdown] id="3zKrI-AWsbQx"
# ## Analysis
# + [markdown] id="cjUwzTfKsdsd"
# 1000 ps took 11 min.
# + colab={"base_uri": "https://localhost:8080/"} id="XQTmX44CstCz" outputId="dddf521b-e019-4548-dada-d66e2dab19c8" language="bash"
# source /content/drive/MyDrive/IBM3202/gromacs-2021/bin/GMXRC
# gmx trjconv -s md_0_1.tpr -f md_0_1.xtc -o md_0_1_noPBC.xtc -pbc mol -center <<EOF
# 1
# 0
# EOF
# + [markdown] id="9ebpL4ieQWTs"
# ### RMSD
# + colab={"base_uri": "https://localhost:8080/"} id="5Gwu5mTPt-VC" outputId="6f773339-4de6-4eb4-e3a8-ea3de3f12957" language="bash"
# source /content/drive/MyDrive/IBM3202/gromacs-2021/bin/GMXRC
# gmx rms -s md_0_1.tpr -f md_0_1_noPBC.xtc -o rmsd.xvg -tu ns <<EOF
# 4
# 4
# EOF
# + colab={"base_uri": "https://localhost:8080/"} id="DMzdYj14u1Nh" outputId="57c37a63-2a40-4046-8fe9-ce82230dc1ea"
# !head -20 rmsd.xvg
# + colab={"base_uri": "https://localhost:8080/"} id="g48FHkz3BLBS" outputId="3cfae4e9-ab41-4fe7-ca04-dc95cb154ce7"
# %cd /content/drive/MyDrive/
# + colab={"base_uri": "https://localhost:8080/", "height": 542} outputId="41062e8e-6afc-4453-ac06-1bcc5e5e266b" id="EjJNz98gAOK1"
# RMSD
import plotly.graph_objects as go
import pandas as pd
import numpy as np
import plotly.express as px
from pathlib import Path
c = np.loadtxt('rmsd.xvg',comments=['#', '@']) # strip out the comments
header = ["x", "y"] # add headers to columns
frame5a = pd.DataFrame(c, columns=header)
d = np.loadtxt('rmsd_xtal.xvg',comments=['#', '@']) # strip out the comments
header = ["x", "y"] # add headers to columns
frame5b = pd.DataFrame(d, columns=header)
fig5a = px.line(frame5a, x="x", y="y")
fig5a.update_traces(line=dict(color = 'dodgerblue'),
name="RMSD")
fig5b = px.line(frame5b, x="x", y="y")
fig5b.update_traces(line=dict(color = 'firebrick'),
name="crystal RMSD")
fig5 = go.Figure(data=fig5a.data + fig5b.data)
fig5.update_layout(width=700, title_text="RMSD")
fig5.update_xaxes(title_text="Time (ps)") # label x-axis
fig5.update_yaxes(title_text="RMSD") # label y-axis
fig5.update_traces(showlegend=True)
fig5.show()
# + [markdown] id="aIl_QgoO03XZ"
# #### Questions
#
# See [PHY542: MD analysis with VMD tutorial](https://becksteinlab.physics.asu.edu/pages/courses/2017/PHY542/practicals/md/dynamics/rmsd_fitting.html)
#
# Questions:
#
# - What is the maximum RMSD?
# - How does the RMSD change when you include
# - all atoms?
# - all heavy atoms (i.e., do not use hydrogens)?
# - Does your RMSD result depend on the previous superposition step?
# - Is the result consistent with the previous result of your RMSD analysis of the static structures?
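As a reminder of what `gmx rms` reports: after least-squares fitting, the RMSD is the root-mean-square of the per-atom displacements. A bare-bones numpy version, assuming the two frames are already superimposed:

```python
import numpy as np

def rmsd(a, b):
    """Plain RMSD between two (N, 3) coordinate arrays (no fitting step)."""
    diff = a - b
    return np.sqrt((diff * diff).sum() / len(a))

# toy example: two atoms, each displaced by 1 unit
a = np.zeros((2, 3))
b = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(rmsd(a, b))  # 1.0
```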
# + id="XGR4tHHLvD7J"
# !chmod 755 -R /content/drive/MyDrive/IBM3202/gromacs-2021
# + colab={"base_uri": "https://localhost:8080/"} id="DyiYINrWAL4u" outputId="28ba1c9f-7475-4484-bbb8-65b73991f4be" language="bash"
# source /content/drive/MyDrive/IBM3202/gromacs-2021/bin/GMXRC
# gmx rms -s em.tpr -f md_0_1_noPBC.xtc -o rmsd_xtal.xvg -tu ns <<EOF
# 4
# 4
# EOF
# + [markdown] id="77DXAEwIQe7O"
# ### Radius of Gyration
# + colab={"base_uri": "https://localhost:8080/"} id="YGup94RnEuLI" outputId="cd7d5c54-a22c-4808-f6da-57f9f4b84da6" language="bash"
# source /content/drive/MyDrive/IBM3202/gromacs-2021/bin/GMXRC
# gmx gyrate -s md_0_1.tpr -f md_0_1_noPBC.xtc -o gyrate.xvg <<EOF
# 1
# EOF
# + id="wN8s1AksFHRI"
# + colab={"base_uri": "https://localhost:8080/", "height": 542} outputId="96b943ac-6868-441a-a4ee-1e6947f0412a" id="Mjd_HhzKFHqb"
#@title Radius of Gyration
e = np.loadtxt('gyrate.xvg',comments=['#', '@']) # strip out the comments
header = ["t", "rg", "x", "y", "z"] # add headers to columns
frame6 = pd.DataFrame(e, columns=header)
fig6 = px.line(frame6, x="t", y="rg")
fig6.update_traces(line=dict(color = 'dodgerblue'),
name="radius of gyration"
)
fig6.update_layout(width=700, title_text="Radius of Gyration")
fig6.update_xaxes(title_text="Time (ps)")
fig6.update_yaxes(range=[1.40, 1.60])
fig6.update_yaxes(title_text="R<sub>g</sub> (nm)") # label y-axis
fig6.update_traces(showlegend=True)
fig6.show()
# + colab={"base_uri": "https://localhost:8080/"} id="uQ_dKcorF2UI" outputId="ddec6ba4-9642-4446-b2f1-876c2e6064f3"
# !head -40 gyrate.xvg
# + [markdown] id="ORczByX-T3w2"
# ### Ramachandran
# + id="Tibi0uLCT9Hy" language="bash"
# source /content/drive/MyDrive/IBM3202/gromacs-2021/bin/GMXRC
# gmx rama -s md_0_1.tpr -f md_0_1_noPBC.xtc -o rama.xvg
# + colab={"base_uri": "https://localhost:8080/"} id="q4_vm2X0JflH" outputId="4919c171-d83e-4bb0-d3b6-fe656c26407a" language="bash"
# source /content/drive/MyDrive/IBM3202/gromacs-2021/bin/GMXRC
# gmx chi -s md_0_1.tpr -f md_0_1_noPBC.xtc -rama -o chi.xvg
# + colab={"base_uri": "https://localhost:8080/"} id="pQmL1A2WKM1j" outputId="19e38f38-1752-4b1b-e919-6d079bf617b7"
# !head -150 /content/drive/MyDrive/ramaPhiPsiVAL85.xvg
# + colab={"base_uri": "https://localhost:8080/"} id="8S3XOC_EUyIn" outputId="40dd6677-3028-46ea-8466-f6ee046d261f"
# !head -40 rama.xvg
# + colab={"base_uri": "https://localhost:8080/"} id="g4-6YJsu0Xci" outputId="896b46b9-1458-4596-cfd6-a3e10e039d72"
# !head -150 rama.xvg
# + id="eleI_jvlt_8I"
# !grep -v -e '\#' -e '\@' rama.xvg > rama.txt
# + id="r6TCojJLK2pk"
# !grep -v -e '\#' -e '\@' chi.xvg > chi.txt
# + colab={"base_uri": "https://localhost:8080/"} id="qZXXUIAKK92x" outputId="f946b15d-cd08-48c8-b7f0-81c4fd1d9a1e"
h = pd.read_csv('chi.txt', header=None, sep=r'\s\s+', engine='python')
print(h)
# + colab={"base_uri": "https://localhost:8080/"} id="EpRLUw8muuM5" outputId="23058563-2c9b-4154-d0a7-a777c3b67bb2"
# !head -5 rama.txt
# + colab={"base_uri": "https://localhost:8080/"} id="gwsLIc9LVVz_" outputId="51c3a60d-4f05-4443-cd80-3ee0591d0a8b"
e = pd.read_csv('rama.txt', header=None, sep=r'\s\s+', engine='python')
# e = pd.read_csv('rama.xvg',comments=['#', '@'])
# e = np.loadtxt('rama.xvg',
# comments=['#', '@'], # strip out the comments
# dtype='object')
# e.columns=["phi", "psi", "aa"]
print(e)
# header = ["phi", "psi", "aa"] # add headers to columns
# frame7 = pd.DataFrame(e, columns=header)
# e.columns = ["phi", "psi", "aa"]
# e.head()
# frame7 = pd.DataFrame(e.values, columns=header)
# frame7
# fig7 = px.scatter(frame7, x="phi", y="psi",
# hover_name="aa"
# )
# names = [_ for _ in 'abcdef']
# df = pd.DataFrame(A, index=names, columns=names)
# fig5b = px.line(frame5b, x="x", y="y")
# fig5b.update_traces(line=dict(color = 'firebrick'),
# name="crystal RMSD")
# fig5 = go.Figure(data=fig5a.data + fig5b.data)
# fig5.update_layout(width=700, title_text="RMSD")
# fig7.update_xaxes(title_text="Phi") # label x-axis
# fig7.update_yaxes(title_text="Psi") # label y-axis
# fig7.update_traces(showlegend=True)
# fig7.show()
# + colab={"base_uri": "https://localhost:8080/"} id="AbACjablvSLY" outputId="812727a5-637f-4eb2-e29a-2349ad7e6bff"
# !head -5 rama.txt
# + [markdown] id="6fT6z-kUWfDh"
# All of the rama data for each time frame is in a single file with no demarcation for each time point. I can use `sed` to add a new column.
#
# I can use `sed` to identify the chunks that represent each time point. Start with `GLY-126` and end with `GLN-227`.
#
# UPDATE:
#
# I can use `gmx chi` to gather the `phi/psi` angles for each amino acid as a time series in a separate file.
#
# - I can move them all to their own directory.
# - Or define the directory in the `-o` flag.
# - Concatenate them as I create a dataframe.
# - At some point I need to extract the amino acid number (from the filename) and put it into the file as a new column.
# - I also need to fix the amino acid numbers as they are offset by 126. The original protein starts at aa 126, but the `chi` command renumbered them as it created the filenames.
# - Then plot all of them on the same plot as a time series.
#
# This is better than trying to parse the `rama.xvg` file.
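The renumbering step in the plan above can be sketched like this (assuming `gmx chi` numbered residues from 1, so the true residue number is the file's number plus 125; the helper name is mine):

```python
import re

def fix_residue(fname, offset=125):
    """Split e.g. 'ARG5.xvg' into ('ARG', 130); the offset is an assumption."""
    m = re.match(r"([A-Z]+)(\d+)", fname)
    return m.group(1), int(m.group(2)) + offset

print(fix_residue("ARG5.xvg"))  # ('ARG', 130)
```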
# + [markdown] id="N9R7wHWsRbbm"
# Let's give it a whirl.
#
# ```py
#
# files=(*.xvg)
#
#
#
# ```
#
# `grep "string" "${files[@]}"`
# will expand to:
# `grep "string" "1.txt" "2.txt" "3.txt"`
#
#
# from [this site](https://unix.stackexchange.com/questions/550964/grep-over-multiple-files-redirecting-to-a-different-filename-each-time)
#
# ```bash
# for f in *-QTR*.tsv
# do
# grep 8-K < "$f" > "${f:0:4}"Q"${f:8:1}".txt
# done
# ```
#
# - the first four characters of the filename -- the year
# - the letter Q
# - the 9th character of the filename -- the quarter
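The same slicing can be mirrored in Python, where string indexing is zero-based just like bash's `${f:0:4}` (the filename here is hypothetical):

```python
f = "2019-QTR3.tsv"  # hypothetical filename matching the pattern above
# first four chars (the year) + "Q" + the 9th char (the quarter)
new_name = f[0:4] + "Q" + f[8] + ".txt"
print(new_name)  # 2019Q3.txt
```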
#
# In my case, I want to add part of the filename (the amino acid name and number) to a column in the table.
#
# From [this site](https://stackoverflow.com/questions/41857659/python-pandas-add-filename-column-csv)
#
# ```py
# import os
#
# for csv in globbed_files:
# frame = pd.read_csv(csv)
# frame['filename'] = os.path.basename(csv)
# data.append(frame)
# ```
#
# from [this site](https://stackoverflow.com/questions/51845613/adding-columns-to-dataframe-based-on-file-name-in-python)
#
# ```py
# import pandas as pd
#
# #load data files
# data1 = pd.read_csv('C:/file1_USA_Car_1d.txt')
# data2 = pd.read_csv('C:/file2_USA_Car_2d.txt')
# data3 = pd.read_csv('C:/file3_USA_Car_1m.txt')
# data4 = pd.read_csv('C:/file3_USA_Car_6m.txt')
# data5 = pd.read_csv('C:file3_USA_Car_1Y.txt')
#
# df = pd.DataFrame()
#
# print(df)
#
# df = data1
#
# ---
#
# import glob
# import pandas as pd
#
# df_list = []
# for file in glob.glob('C:/file1_*_*_*.txt'):
# # Tweak this to work for your actual filepaths, if needed.
# country, typ, dur = file.split('.')[0].split('_')[1:]
# df = (pd.read_csv(file)
# .assign(Country=country, Type=typ, duration=dur))
# df_list.append(df)
#
# df = pd.concat(df_list)
# ```
#
# Probably best to use `mv` to batch rename the files.
#
# from [this site](
#
#
# ```bash
# rename -n 's/<search for>/<replace with>/' <target files>
# ```
#
# - `-n`: perform a dry run and show what the output would look like
# - `s/`: perform a substitution
# - `<search for>`: what you want to replace; can use regex
# - `<replace with>`: self-explanatory
# - `/'`: a closing slash is needed, and the command must be wrapped in single or double quotes
# - `<target files>`: files to rename; can use wildcards
#
#
# from [this site](https://stackoverflow.com/questions/32042019/ubuntu-bulk-file-rename)
#
# ```bash
# rename -n "s/ramaPhiPsi//" ramaPhiPsi*
# ```
#
# <mark>This works!</mark>
#
# Now we can use Pandas to put a column in each file that contains the filename, which we will use in plotly on hover.
#
# From [this site](https://stackoverflow.com/questions/42756696/read-multiple-csv-files-and-add-filename-as-new-column-in-pandas) we see how to add filenames to the `.xvg` files, then split off the `.xvg` extension leaving only the amino acid name and number.
#
# **Need to move files into their own directory first.**
#
# ```bash
# # # %mkdir rama
# # # %mv ramaPhiPsi*.xvg rama/
# ```
#
# ```py
# files = glob.glob('samples_for_so/*.csv')
# print (files)
# #['samples_for_so\\a.csv', 'samples_for_so\\b.csv', 'samples_for_so\\c.csv']
#
#
# df = pd.concat([pd.read_csv(fp).assign(New=os.path.basename(fp)) for fp in files])
# ```
#
# + id="5HnUP8bKaDC5"
# !rename -n "s/ramaPhiPsi//" ramaPhiPsi*
# + id="kZ3QoRd3ketM"
# make a new directory
# %mkdir rama
# + id="fm6G3rpNyMim"
# move all the rama files into the new directory
# %mv ramaPhiPsi*.xvg rama/
# + colab={"base_uri": "https://localhost:8080/"} id="Bcr7HbwYyh-T" outputId="2b3edeaf-1c1f-49c5-89e9-27f8755e39f6"
# move into the new directory
# %cd rama
# + id="2rwq6trhyoz_"
# rename the files
# !rename "s/ramaPhiPsi//" ramaPhiPsi*
# + id="yNZ3VxfLzUT_" language="bash"
# for f in *.xvg
# do
# base=${f%%.xvg}
# grep -v -e '\#' -e '\@' "$f" > "${base%%.*}".csv
# done
#
# # This worked perfectly
# # ready for adding columns with filename and then splitting
# + id="UTam280fOZ8c"
# get rid of the .xvg files
# %rm *.xvg
# + [markdown] id="69ydDW5RKOl0"
# from [this site](https://tldp.org/LDP/abs/html/parameter-substitution.html)
#
# >`${var%%Pattern}` Remove from `$var` the longest part of `$Pattern` that matches the back end of `$var`.
#
# Wow. Is this not the most esoteric bash stuff?
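In Python the same suffix stripping is less cryptic: `pathlib` does the job of `${f%%.xvg}` directly.

```python
from pathlib import Path

f = Path("ALA100.xvg")
print(f.stem)                      # ALA100 (name without the extension)
print(f.with_suffix(".csv").name)  # ALA100.csv
```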
# + colab={"base_uri": "https://localhost:8080/"} id="tnPWEUUa7P7q" outputId="8fd6266a-2c18-4416-aefb-3bc9c6e29d1d"
# !head -10 ALA100.csv
# + id="rPw4x8ExJRRn"
# !head -50 ALA100.xvg
# + id="xh05l6t57a0d"
# %rm ramaX1X2*.xvg
# + [markdown] id="GJtJlRQvO3zg"
# #### Adding filename column and splitting csv files
#
# from [this site](https://stackoverflow.com/questions/42756696/read-multiple-csv-files-and-add-filename-as-new-column-in-pandas)
#
# ```py
# import pandas as pd
# import glob, os
#
#
# files = glob.glob('samples_for_so/*.csv')
# print (files)
# #['samples_for_so\\a.csv', 'samples_for_so\\b.csv', 'samples_for_so\\c.csv']
#
#
# df = pd.concat([pd.read_csv(fp).assign(New=os.path.basename(fp)) for fp in files])
# print (df)
# a b c d New
# 0 0 1 2 5 a.csv
# 1 1 5 8 3 a.csv
# 0 0 9 6 5 b.csv
# 1 1 6 4 2 b.csv
# 0 0 7 1 7 c.csv
# 1 1 3 2 6 c.csv
# ```
#
# This did not work well for my files. I think I need to add headers to my files at some point.
#
# But the splitting will probably work.
#
# Splitting:
#
# ```py
# files = glob.glob('samples_for_so/*.csv')
# df = pd.concat([pd.read_csv(fp).assign(New=os.path.basename(fp).split('.')[0])
# for fp in files])
# print (df)
# a b c d New
# 0 0 1 2 5 a
# 1 1 5 8 3 a
# 2 0 9 6 5 b
# 3 1 6 4 2 b
# 4 0 7 1 7 c
# 5 1 3 2 6 c
# ```
#
#
# + id="eD3bYVKCPTnx"
# adding filename column
import pandas as pd
import glob, os
files = glob.glob('*.csv')
# print (files)
# + [markdown] id="oLOs1nXtRbpd"
# Adding filenames as a column
#
# from [this site](https://stackoverflow.com/questions/41857659/python-pandas-add-filename-column-csv)
#
# ```py
# import pandas as pd
# import glob
#
# globbed_files = glob.glob("*.csv") #creates a list of all csv files
#
# for csv in globbed_files:
# frame = pd.read_csv(csv, sep='^') # or other separator
# frame['filename'] = os.path.basename(csv)
# data.append(frame)
# ```
# + id="DWqngsZRRan9"
import os
import pandas as pd
import glob

globbed_files = glob.glob("*.csv")
# print(globbed_files)
data = []
for csv in globbed_files:
    frame = pd.read_csv(csv, sep=r'\s+')  # or other separator
    frame['filename'] = os.path.basename(csv)
    data.append(frame)
# + [markdown] id="XhCwojrQYbMW"
# #### Adding headers
# + [markdown] id="W9j2QxI-YeAG"
# Try like earlier
# + colab={"base_uri": "https://localhost:8080/"} id="xpRsDUTSTnjd" outputId="7a7e1395-7faf-44fb-d676-b56a16a999f9"
import os
import pandas as pd
import glob
globbed_files = glob.glob("ALA100.csv")
# print(globbed_files)
for csv in globbed_files:
    testframe = pd.read_csv(csv, header=None, sep=r'\s\s+', engine='python')
    testframe.columns = ["phi", "psi"]
    testframe['filename'] = os.path.basename(csv)
    data.append(testframe)
print(testframe)
# worked like a charm!
# + [markdown] id="DRChC3BVeBN7"
# #### Splitting filename
# + colab={"base_uri": "https://localhost:8080/"} id="EhkSm4M6eOVH" outputId="361029fd-666c-4d8c-f215-2cedb8cdc0cc"
# making a new dataframe from splitting the column
testframe2 = testframe['filename'].str.split('.', expand = True)
print(testframe2)
# + colab={"base_uri": "https://localhost:8080/"} id="m5f17cnpmTdx" outputId="7e4601d7-f208-4a31-e6cf-4d227f18ef94"
# adding the separate amino acid column from the new data frame
testframe["aa"]= testframe2[0]
print(testframe)
# + [markdown] id="toJv8onMY-IM"
# So far, so good.
#
# Next, see [this site](https://jonathansoma.com/lede/foundations-2017/classes/working-with-many-files/class/).
# + colab={"base_uri": "https://localhost:8080/"} id="DumobL_H0b4l" outputId="e9988b26-94ce-454d-b05f-3b56bf539af4"
print(testframe)
# + colab={"base_uri": "https://localhost:8080/"} id="4X1uRxYQz3ip" outputId="9a129045-ae2f-48e4-f1e2-881099cf6376"
# Add new column to the DataFrame
testframe['time'] = (range(0, 1010, 10)) # range starting at 0 ending at 1000 with a stepsize of 10.)
print(testframe)
# + [markdown] id="3t5E3D2gh_Pt"
# ### Creating the Ramachandran Plot
# + colab={"base_uri": "https://localhost:8080/", "height": 717} id="l37L0xULvwXg" outputId="3d247566-6b61-47d7-d29f-ce8a298f351f"
#@title Ramachandran Plot
import plotly.express as px
import plotly.graph_objects as go
import pandas as pd
# Create dataframes for each file
basepath = '/content/drive/MyDrive/rama8000/'
# each csv holds one contour line with "phi", "psi", and "number" columns
region_files = (['generaL-aLLowed%d.csv' % i for i in range(1, 7)] +
                ['generaL-favored%d.csv' % i for i in range(1, 6)])
# ===== create figures =====
region_figs = []
for fname in region_files:
    df_region = pd.read_csv(basepath + fname)
    # x and y are the column names
    fig_region = px.line(df_region, x="phi", y="psi", hover_name="number")
    # add line color; favored contours are drawn thicker than allowed ones
    fig_region.update_traces(line=dict(
        color='deepskyblue',
        width=2 if 'favored' in fname else 1))
    region_figs.append(fig_region)
# figtestrama = px.scatter(testframe, x="phi", y="psi", animation_frame="time", animation_group="aa",
#            hover_name="aa", range_x=[-180,180], range_y=[-180,180], color_discrete_sequence=['white'])
# ==========================================
# Create a multi-aa plot
# ==========================================
figtestrama2 = px.scatter(bigframe, x="phi", y="psi", animation_frame="time", animation_group="aa",
                          hover_name="aa", range_x=[-180,180], range_y=[-180,180], color_discrete_sequence=['white'])
# ==========================================
# Add the traces for the allowed and favored regions
# ==========================================
for fig_region in region_figs:
    figtestrama2.add_trace(fig_region.data[0])
figtestrama2.update_layout(width=700,
height=700,
title_text="General",
# unicode for greek characters
xaxis=dict(title=u"\u03A6"),
yaxis=dict(title=u"\u03A8"),
plot_bgcolor="black"
)
figtestrama2.update_traces(showlegend=False)
# update the axes
figtestrama2.update_xaxes(showline=True,
zeroline=True,
showgrid=False,
zerolinewidth=1,
zerolinecolor='grey'
)
figtestrama2.update_yaxes(showline=True,
zeroline=True,
showgrid=False,
zerolinewidth=1,
zerolinecolor='grey'
)
# show the graph
figtestrama2.show()
# save it as html to share it on website
figtestrama2.write_html("rama-arg.html")
# + [markdown] id="qpwsQpOk3-M2"
# <mark>**Success!**</mark>
#
# Wow. This took a while. But the result is what I wanted. Each amino acid is in its own file, so I can add just the glycines to the glycine plot, etc. I could probably tweak a few more things, but I'll stop here for now. I will likely add more amino acids to this plot and set up separate plots for the glycines and the ILE-VALs.
#
# See [How to Save Plotly Animations: The Ultimate Guide](https://holypython.com/how-to-save-plotly-animations-the-ultimate-guide/) for how to save this animation on my MkDocs site.
#
# >In Github case you can simply start a public repository, upload your file to it, commit the file and then share its link in the `iframe`.
#
# ```html
# <iframe width="900" height="800" frameborder="0" scrollng="no" src="https://holypython.github.io/holypython2/covid_cases.html"></iframe>
# ```
# + [markdown] id="7ammonej1_RH"
# ### Building Another Ramachandran Plot
# + [markdown] id="dgqyqwoP1vJL"
# Let's try to add a few more amino acids to the General Ramachandran plot.
# + id="qZcLsFA54bmX" colab={"base_uri": "https://localhost:8080/"} outputId="67df94d1-7f46-4c37-b5bb-b64bdd3aeae8"
# move into the directory with the files
# %cd rama
# + [markdown] id="r8ejxJyX2KCd"
# #### Prepare the files
# + id="ubghn9eN2tDz"
import os
import pandas as pd
import glob
globbed_files = glob.glob("AR*.csv")
# print(globbed_files)
# this worked
# + id="-sKGfRqB7k1_"
# don't need this cell
# the cell below worked
list_of_dfs = [pd.read_csv(filename,
header=None,
sep='\s\s+',
engine='python')
for filename in globbed_files]
print(list_of_dfs)
# this worked
# + [markdown] id="uneJWy3BaN-J"
# The most useful information was from this question/answer on Stackoverflow: [Python Pandas add Filename Column CSV](https://stackoverflow.com/questions/41857659/python-pandas-add-filename-column-csv).
# + colab={"base_uri": "https://localhost:8080/"} id="hSBn1f139Jab" outputId="3b5e5bb0-8271-4b03-ba74-7dad75e45c25"
globbed_files = glob.glob("AR*.csv") #creates a list of all csv files
data = [] # pd.concat takes a list of dataframes as an argument
for csv in globbed_files:
frame = pd.read_csv(csv, header=None, sep='\s\s+', engine='python')
frame.columns=["phi", "psi"]
frame['filename'] = os.path.basename(csv)
data.append(frame)
    # Add a time column: range starting at 0, ending at 1000, with a step size of 10
    frame['time'] = (range(0, 1010, 10))
    # Split the filename to recover the amino acid name, then drop the filename column
    frame2 = frame['filename'].str.split('.', expand = True)
    frame["aa"] = frame2[0]
    del frame["filename"]
    # print(frame)
bigframe = pd.concat(data, ignore_index=True) # don't let pandas try to align row indexes
print(bigframe)
# bigframe.to_csv("Pandas_output2.csv")
# print(bigframe)
# print(frame)
# This worked to make a large .csv file
# + [markdown] id="I7cvrprfGAoy"
# Add the `bigframe` to the animated time series chart, above.
|
gromacs_test.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Detecting Social Distance
# Network
# Importing the libraries
import backbone
import tensorflow as tf
import cv2
import numpy as np
import os
import argparse
from scipy.spatial.distance import pdist, squareform
from network_model import model
from aux_functions import *
# Make Network
# +
class model:
def __init__(self):
detection_graph, self.category_index = backbone.set_model(
"ssd_mobilenet_v1_coco_2018_01_28", "mscoco_label_map.pbtxt"
)
self.sess = tf.InteractiveSession(graph=detection_graph)
self.image_tensor = detection_graph.get_tensor_by_name("image_tensor:0")
self.detection_boxes = detection_graph.get_tensor_by_name("detection_boxes:0")
self.detection_scores = detection_graph.get_tensor_by_name("detection_scores:0")
self.detection_classes = detection_graph.get_tensor_by_name(
"detection_classes:0"
)
self.num_detections = detection_graph.get_tensor_by_name("num_detections:0")
def get_category_index(self):
return self.category_index
def detect_pedestrians(self, frame):
input_frame = frame
image_np_expanded = np.expand_dims(input_frame, axis=0)
(boxes, scores, classes, num) = self.sess.run(
[
self.detection_boxes,
self.detection_scores,
self.detection_classes,
self.num_detections,
],
feed_dict={self.image_tensor: image_np_expanded},
)
classes = np.squeeze(classes).astype(np.int32)
boxes = np.squeeze(boxes)
scores = np.squeeze(scores)
pedestrian_score_threshold = 0.35
pedestrian_boxes = []
total_pedestrians = 0
for i in range(int(num[0])):
if classes[i] in self.category_index.keys():
class_name = self.category_index[classes[i]]["name"]
# print(class_name)
if class_name == "person" and scores[i] > pedestrian_score_threshold:
total_pedestrians += 1
score_pedestrian = scores[i]
pedestrian_boxes.append(boxes[i])
return pedestrian_boxes, total_pedestrians
# -
# Make the function
# +
def plot_lines_between_nodes(warped_points, bird_image, d_thresh):
p = np.array(warped_points)
dist_condensed = pdist(p)
dist = squareform(dist_condensed)
dd = np.where(dist < d_thresh * 6 / 10)
close_p = []
color_10 = (96,160,48)
lineThickness = 4
ten_feet_violations = len(np.where(dist_condensed < 10 / 6 * d_thresh)[0])
for i in range(int(np.ceil(len(dd[0]) / 2))):
if dd[0][i] != dd[1][i]:
point1 = dd[0][i]
point2 = dd[1][i]
close_p.append([point1, point2])
cv2.line(
bird_image,
(p[point1][0], p[point1][1]),
(p[point2][0], p[point2][1]),
color_10,
lineThickness,
)
dd = np.where(dist < d_thresh)
six_feet_violations = len(np.where(dist_condensed < d_thresh)[0])
total_pairs = len(dist_condensed)
danger_p = []
color_6 = (96,160,48)
for i in range(int(np.ceil(len(dd[0]) / 2))):
if dd[0][i] != dd[1][i]:
point1 = dd[0][i]
point2 = dd[1][i]
danger_p.append([point1, point2])
cv2.line(
bird_image,
(p[point1][0], p[point1][1]),
(p[point2][0], p[point2][1]),
color_6,
lineThickness,
)
# Display Birdeye view
cv2.imshow("Bird Eye View", bird_image)
cv2.waitKey(1)
return six_feet_violations, ten_feet_violations, total_pairs
def plot_points_on_bird_eye_view(frame, pedestrian_boxes, M, scale_w, scale_h):
frame_h = frame.shape[0]
frame_w = frame.shape[1]
node_radius = 10
color_node = (96,160,48) #96,160,48
thickness_node = 20
solid_back_color = (96,160,48) #41, 41, 41
blank_image = np.zeros(
(int(frame_h * scale_h), int(frame_w * scale_w), 3), np.uint8
)
blank_image[:] = solid_back_color
warped_pts = []
for i in range(len(pedestrian_boxes)):
mid_point_x = int(
(pedestrian_boxes[i][1] * frame_w + pedestrian_boxes[i][3] * frame_w) / 2
)
mid_point_y = int(
(pedestrian_boxes[i][0] * frame_h + pedestrian_boxes[i][2] * frame_h) / 2
)
pts = np.array([[[mid_point_x, mid_point_y]]], dtype="float32")
warped_pt = cv2.perspectiveTransform(pts, M)[0][0]
warped_pt_scaled = [int(warped_pt[0] * scale_w), int(warped_pt[1] * scale_h)]
warped_pts.append(warped_pt_scaled)
bird_image = cv2.circle(
blank_image,
(warped_pt_scaled[0], warped_pt_scaled[1]),
node_radius,
color_node,
thickness_node,
)
return warped_pts, bird_image
# -
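The pair-counting logic in `plot_lines_between_nodes` rests on `pdist`/`squareform`: `pdist` returns the condensed vector of pairwise distances, and counting the entries below a threshold gives the number of violating pairs. A minimal sketch with three synthetic points (the threshold value here is made up for illustration):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Three points: two of them 3 units apart, one far away
points = np.array([[0, 0], [3, 0], [100, 100]])
d_thresh = 6  # hypothetical distance threshold in warped pixels

dist_condensed = pdist(points)      # condensed pairwise distances
dist = squareform(dist_condensed)   # full symmetric distance matrix

# Same counting logic as plot_lines_between_nodes
violations = len(np.where(dist_condensed < d_thresh)[0])
print(violations)
```

Only the first pair lies within the threshold, so one violation is counted.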
def get_camera_perspective(img, src_points):
IMAGE_H = img.shape[0]
IMAGE_W = img.shape[1]
src = np.float32(np.array(src_points))
dst = np.float32([[0, IMAGE_H], [IMAGE_W, IMAGE_H], [0, 0], [IMAGE_W, 0]])
M = cv2.getPerspectiveTransform(src, dst)
M_inv = cv2.getPerspectiveTransform(dst, src)
return M, M_inv
def put_text(frame, text, text_offset_y=25):
font_scale = 0.8
font = cv2.FONT_HERSHEY_SIMPLEX
rectangle_bgr = (35, 35, 35)
(text_width, text_height) = cv2.getTextSize(
text, font, fontScale=font_scale, thickness=1
)[0]
# set the text start position
text_offset_x = frame.shape[1] - 400
# make the coords of the box with a small padding of two pixels
box_coords = (
(text_offset_x, text_offset_y + 5),
(text_offset_x + text_width + 2, text_offset_y - text_height - 2),
)
frame = cv2.rectangle(
frame, box_coords[0], box_coords[1], rectangle_bgr, cv2.FILLED
)
frame = cv2.putText(
frame,
text,
(text_offset_x, text_offset_y),
font,
fontScale=font_scale,
color=(96,160,48), #255, 255, 255
thickness=1,
)
return frame, 2 * text_height + text_offset_y
def calculate_stay_at_home_index(total_pedestrians_detected, frame_num, fps):
normally_people = 10
pedestrian_per_sec = np.round(total_pedestrians_detected / frame_num, 1)
sh_index = 1 - pedestrian_per_sec / normally_people
return pedestrian_per_sec, sh_index
def plot_pedestrian_boxes_on_image(frame, pedestrian_boxes):
frame_h = frame.shape[0]
frame_w = frame.shape[1]
thickness = 2
# color_node = (80, 172, 110)
color_node = (96,160,48)
# color_10 = (160, 48, 112)
for i in range(len(pedestrian_boxes)):
pt1 = (
int(pedestrian_boxes[i][1] * frame_w),
int(pedestrian_boxes[i][0] * frame_h),
)
pt2 = (
int(pedestrian_boxes[i][3] * frame_w),
int(pedestrian_boxes[i][2] * frame_h),
)
frame_with_boxes = cv2.rectangle(frame, pt1, pt2, color_node, thickness)
return frame_with_boxes
|
Network and function -checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# <font size="+5">#02 | Decision Tree. A Supervised Classification Model</font>
# - Subscribe to my [Blog ↗](https://blog.pythonassembly.com/)
# - Let's keep in touch on [LinkedIn ↗](https://www.linkedin.com/in/jsulopz) 😄
# # Discipline to Search Solutions in Google
# > Apply the following steps when **looking for solutions in Google**:
# >
# > 1. **Necessity**: How to load an Excel file in Python?
# > 2. **Search in Google**: by keywords
# >     - `load excel python`
# >     - ~~how to load excel in python~~
# > 3. **Solution**: What's the `function()` that loads an Excel file in Python?
# >     - A function is to programming what the atom is to physics.
# >     - Every time you want to do something in programming
# >     - **you will need a `function()`** to make it happen.
# >     - Therefore, you must **detect the parentheses `()`**
# >     - out of all the words that you see on a website,
# >     - because they indicate the presence of a `function()`.
# # Load the Data
# > Load the Titanic dataset with the below commands
# > - This dataset contains the **people** (rows) aboard the Titanic
# > - And their **sociological characteristics** (columns)
# > - The aim of this dataset is to predict the probability to `survive`
# > - Based on the social demographic characteristics.
# +
import seaborn as sns
df = sns.load_dataset(name='titanic').iloc[:, :4]
# -
df.head()
# # `DecisionTreeClassifier()` Model in Python
# ## Build the Model
# > 1. **Necessity**: Build the Model
# > 2. **Google**: How do you search for the solution?
# > 3. **Solution**: Find the `function()` that makes it happen
# ## Code Thinking
#
# > Which function computes the Model?
# > - `fit()`
# >
# > How can you **import the function in Python**?
from sklearn.tree import DecisionTreeClassifier
model = DecisionTreeClassifier()
model.fit()
# ### Separate Variables for the Model
#
# > Regarding their role:
# > 1. **Target Variable `y`**
# >
# > - [ ] What would you like **to predict**?
# >
# > 2. **Explanatory Variable `X`**
# >
# > - [ ] Which variable will you use **to explain** the target?
target = df.survived
df['pclass', 'sex']
df.keys()
explanatory = df[['pclass', 'sex', 'age']]
# ### Finally `fit()` the Model
model = DecisionTreeClassifier()
model.fit(X=explanatory, y=target)
float('2.34')
float('male')
import pandas as pd
read_csv
df
df = pd.get_dummies(data=df, drop_first=True)
df
explanatory = df.drop(columns='survived')
target = df.survived
model.fit(X=explanatory, y=target)
explanatory
df = df.dropna().reset_index(drop=True)
explanatory = df.drop(columns='survived')
target = df.survived
model = DecisionTreeClassifier()
model.__dict__
model.fit(X=explanatory, y=target)
model.__dict__
# ## Calculate a Prediction with the Model
# > - `model.predict_proba()`
df[:1]
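A minimal sketch of what `predict_proba()` returns, using a tiny synthetic dataset rather than the session's titanic `df` (the column values below are invented for illustration):

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

toy = pd.DataFrame({'pclass': [1, 3, 3, 1],
                    'sex_male': [0, 1, 1, 0],
                    'survived': [1, 0, 0, 1]})
X = toy.drop(columns='survived')
y = toy.survived

model = DecisionTreeClassifier().fit(X=X, y=y)

# Probabilities of [not surviving, surviving] for the first row
proba = model.predict_proba(X[:1])
print(proba)
```

Each row of the result holds one probability per class, in the order of `model.classes_`.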
# ## Model Visualization
# > - `tree.plot_tree()`
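`tree.plot_tree()` draws the fitted tree on a matplotlib figure; as a text-only sketch of the same idea, `tree.export_text()` prints the learned splits. Again using a tiny synthetic dataset, not the session's titanic data:

```python
import pandas as pd
from sklearn import tree
from sklearn.tree import DecisionTreeClassifier

toy = pd.DataFrame({'pclass': [1, 3, 3, 1],
                    'sex_male': [0, 1, 1, 0],
                    'survived': [1, 0, 0, 1]})
model = DecisionTreeClassifier().fit(toy[['pclass', 'sex_male']], toy.survived)

# tree.plot_tree(model, feature_names=['pclass', 'sex_male'])  # graphical version
txt = tree.export_text(model, feature_names=['pclass', 'sex_male'])
print(txt)
```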
# ## Model Interpretation
# > Why `sex` is the most important column? What has to do with **EDA** (Exploratory Data Analysis)?
# +
# %%HTML
<iframe width="560" height="315" src="https://www.youtube.com/embed/7VeUPuFGJHk" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
# -
# # Prediction vs Reality
# > How good is our model?
# ## Precision
# > - `model.score()`
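A small sketch of `model.score()` on synthetic data: for a classifier it returns the accuracy, i.e. the fraction of rows whose predicted class matches the true class.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

toy = pd.DataFrame({'pclass': [1, 3, 3, 1, 2],
                    'sex_male': [0, 1, 1, 0, 1],
                    'survived': [1, 0, 0, 1, 0]})
X, y = toy.drop(columns='survived'), toy.survived
model = DecisionTreeClassifier().fit(X, y)

# score() returns the accuracy: the fraction of rows predicted correctly.
# An unpruned tree usually scores 1.0 on its own training data.
print(model.score(X, y))
```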
# ## Confusion Matrix
# > 1. **Sensitivity** (correct prediction on positive value, $y=1$)
# > 2. **Specificity** (correct prediction on negative value $y=0$).
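Sensitivity and specificity can be read off the confusion matrix directly. A small sketch with hand-made labels (not the titanic predictions):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 0, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 1, 0, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # correct predictions on y = 1
specificity = tn / (tn + fp)   # correct predictions on y = 0
print(sensitivity, specificity)
```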
# ## ROC Curve
# > A way to summarise all the metrics (score, sensitivity & specificity)
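A minimal sketch of building a ROC curve from predicted scores with scikit-learn (the labels and scores below are illustrative, not from the titanic model):

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y_true = np.array([0, 0, 1, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8])  # e.g. model.predict_proba(X)[:, 1]

# fpr/tpr trace the curve; the AUC summarises it in one number
fpr, tpr, thresholds = roc_curve(y_true, y_score)
auc = roc_auc_score(y_true, y_score)
print(auc)
```

Plotting `fpr` against `tpr` gives the curve itself; an AUC of 1.0 is a perfect ranking, 0.5 is random.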
|
II Machine Learning & Deep Learning/03_Decision Tree. A Supervised Classification Model/03session_decision-tree.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/luanps/pyserini/blob/master/Run_pyserini_tct_colbert.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="4ucd78ufcviu"
#
# # TCT Colbert Passage Ranking on MSMARCO
#
# <NAME>, <NAME>, and <NAME>. Distilling Dense Representations for Ranking using Tightly-Coupled Teachers. arXiv:2010.11386, October 2020.
#
# Summary of results:
#
# | Condition | MRR@10 | MAP | Recall@1000 |
# |:----------|-------:|----:|------------:|
# | TCT-ColBERT (brute-force index) | 0.3350 | 0.3416 | 0.9640 |
# | TCT-ColBERT (HNSW index) | 0.3345 | 0.3410 | 0.9618 |
# | TCT-ColBERT (brute-force index) + BoW BM25 | 0.3529 | 0.3594 | 0.9698 |
# | TCT-ColBERT (brute-force index) + BM25 w/ doc2query-T5 | 0.3647 | 0.3711 | 0.9751 |
# + [markdown] id="_NfPlP4fFwuH"
# ## Install dependencies
# + id="U4QbbuVnLisS"
from google.colab import auth
auth.authenticate_user()
# + id="-ycV2g24xYhc"
# %%capture
# !pip install pyserini
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-11-openjdk-amd64"
# + colab={"base_uri": "https://localhost:8080/"} id="IIH-U0TG26By" outputId="a3b91fb7-4467-4e08-9871-69aa3950b0ad"
# !pip install faiss-cpu
# + [markdown] id="WRseg_mrg2Lt"
# ## DENSE RETRIEVAL
# + [markdown] id="o0SedQeVFsUY"
# ### Dense retrieval with TCT-ColBERT, brute-force index
#
# + colab={"base_uri": "https://localhost:8080/"} id="K_mAjElIvPRd" outputId="4d26df0c-c6ea-465f-b27e-ea14b349921d"
# !python -m pyserini.dsearch --topics msmarco-passage-dev-subset \
# --index msmarco-passage-tct_colbert-bf \
# --encoded-queries tct_colbert-msmarco-passage-dev-subset \
# --batch-size 36 \
# --threads 12 \
# --output run.msmarco-passage.tct_colbert.bf.tsv \
# --output-format msmarco
# + colab={"base_uri": "https://localhost:8080/"} id="k9OI5KlzvYhK" outputId="2651ac97-c7bc-4de8-b5ea-10bd03c2a1f5"
# !gsutil cp run.msmarco-passage.tct_colbert.bf.tsv gs://luanps/information_retrieval/pyserini/tct_colbert_bf/
# + [markdown] id="rCnE_TM0NgZ2"
# #### Evaluation
# + id="sgJ9Kc2UM3Lm"
#MRR Eval
# !python -m pyserini.eval.msmarco_passage_eval msmarco-passage-dev-subset run.msmarco-passage.tct_colbert.bf.tsv >> bf_mrr_eval.txt
# + colab={"base_uri": "https://localhost:8080/"} id="SXCWom8RNzZY" outputId="279ce159-d982-4780-e6b4-b69b76a9c20c"
#TREC Eval
# !python -m pyserini.eval.convert_msmarco_run_to_trec_run --input run.msmarco-passage.tct_colbert.bf.tsv \
# --output run.msmarco-passage.tct_colbert.bf.trec
# !python -m pyserini.eval.trec_eval -c -mrecall.1000 \
# -mmap msmarco-passage-dev-subset \
# run.msmarco-passage.tct_colbert.bf.trec >> bf_trec_eval.txt
# + colab={"base_uri": "https://localhost:8080/"} id="QxRQZK9OPkZU" outputId="d0603395-ddf9-44e8-83fc-245fce935d8a"
# !gsutil cp bf_trec_eval.txt bf_mrr_eval.txt gs://luanps/information_retrieval/pyserini/tct_colbert_bf/
# + [markdown] id="Xin5coIhgmEk"
#
# + [markdown] id="9UtPsIzJdiyh"
# ### Dense retrieval with TCT-ColBERT, HNSW (approximate nearest neighbor) index
#
# + colab={"base_uri": "https://localhost:8080/"} id="cCV3BzNSdl7q" outputId="196740d8-7680-4ecb-98dd-c67f5523a177"
# !python -m pyserini.dsearch --topics msmarco-passage-dev-subset \
# --index msmarco-passage-tct_colbert-hnsw \
# --output run.msmarco-passage.tct_colbert.hnsw.tsv \
# --output-format msmarco
# + colab={"base_uri": "https://localhost:8080/"} id="FL44CvBJeE7b" outputId="83403fea-6916-467c-b980-c32a2ea05cab"
# !gsutil cp run.msmarco-passage.tct_colbert.hnsw.tsv gs://luanps/information_retrieval/pyserini/tct_colbert_hnsw/
# + [markdown] id="qXSB-WEdfB1J"
# #### Evaluation
# + colab={"base_uri": "https://localhost:8080/"} id="1XBsTmV-e_0A" outputId="2dd9d0c0-0705-4cea-9840-50b79f048605"
#MRR Eval
# !python -m pyserini.eval.msmarco_passage_eval msmarco-passage-dev-subset run.msmarco-passage.tct_colbert.hnsw.tsv >> hnsw_mrr_eval.txt
# + colab={"base_uri": "https://localhost:8080/"} id="7fusW5M2fR96" outputId="1a406648-4486-487b-b345-739bb1f3b2bf"
#TREC Eval
# !python -m pyserini.eval.convert_msmarco_run_to_trec_run --input run.msmarco-passage.tct_colbert.hnsw.tsv \
# --output run.msmarco-passage.tct_colbert.hnsw.trec
# !python -m pyserini.eval.trec_eval -c -mrecall.1000 \
# -mmap msmarco-passage-dev-subset \
# run.msmarco-passage.tct_colbert.hnsw.trec >> hnsw_trec_eval.txt
# + colab={"base_uri": "https://localhost:8080/"} id="_wfge16IfqL9" outputId="5fdc4fdc-25e8-4af0-8c93-9c11263dcfab"
# !gsutil cp hnsw_trec_eval.txt hnsw_mrr_eval.txt gs://luanps/information_retrieval/pyserini/tct_colbert_hnsw/
# + [markdown] id="I4xOzzZkhIjS"
# ## HYBRID DENSE-SPARSE RETRIEVAL
# + [markdown] id="yW4XWJHZf8yx"
# ## Hybrid retrieval with dense-sparse representations (without document expansion):
#
# - dense retrieval with TCT-ColBERT, brute force index.
# - sparse retrieval with BM25 msmarco-passage (i.e., default bag-of-words) index.
#
# + outputId="6a8cab6e-8e8c-4163-9217-491324ddc745" colab={"base_uri": "https://localhost:8080/"} id="AXmcW19Kf8yx"
# !python -m pyserini.hsearch dense --index msmarco-passage-tct_colbert-bf \
# --encoded-queries tct_colbert-msmarco-passage-dev-subset \
# sparse --index msmarco-passage \
# fusion --alpha 0.12 \
# run --topics msmarco-passage-dev-subset \
# --output run.msmarco-passage.tct_colbert.bf.bm25.tsv \
# --batch-size 36 --threads 12 \
# --output-format msmarco
# + colab={"base_uri": "https://localhost:8080/"} id="4q2qrI8Sf8yy" outputId="c80980e0-94a8-458d-a147-ab060dab60ec"
# !gsutil cp run.msmarco-passage.tct_colbert.bf.bm25.tsv gs://luanps/information_retrieval/pyserini/tct_colbert_bf_bm25/
# + [markdown] id="95XcrVvHf8yy"
# ### Evaluation
# + id="d5wjMkwqf8yy"
#MRR Eval
# !python -m pyserini.eval.msmarco_passage_eval msmarco-passage-dev-subset run.msmarco-passage.tct_colbert.bf.bm25.tsv >> bf_bm25_mrr_eval.txt
# + colab={"base_uri": "https://localhost:8080/"} id="6WXyR8tSf8yy" outputId="cd2425cd-8641-4027-b814-a6b6334cff3e"
#TREC Eval
# !python -m pyserini.eval.convert_msmarco_run_to_trec_run --input run.msmarco-passage.tct_colbert.bf.bm25.tsv \
# --output run.msmarco-passage.tct_colbert.bf.bm25.trec
# !python -m pyserini.eval.trec_eval -c -mrecall.1000 \
# -mmap msmarco-passage-dev-subset \
# run.msmarco-passage.tct_colbert.bf.bm25.trec >> bf_bm25_trec_eval.txt
# + colab={"base_uri": "https://localhost:8080/"} id="20cBprvmf8yy" outputId="a8087aeb-0098-4e6d-8951-9d938409545e"
# !gsutil cp bf_bm25_trec_eval.txt bf_bm25_mrr_eval.txt gs://luanps/information_retrieval/pyserini/tct_colbert_bf_bm25/
# + [markdown] id="pva6yVNkie5E"
# ## Hybrid retrieval with dense-sparse representations (with document expansion):
#
# - dense retrieval with TCT-ColBERT, brute force index.
# - sparse retrieval with doc2query-T5 expanded index.
#
# + outputId="0d7fd7b1-dba1-4b60-cb9c-2e97718e5513" colab={"base_uri": "https://localhost:8080/"} id="8arCs1joie5F"
# !python -m pyserini.hsearch dense --index msmarco-passage-tct_colbert-bf \
# --encoded-queries tct_colbert-msmarco-passage-dev-subset \
# sparse --index msmarco-passage-expanded \
# fusion --alpha 0.22 \
# run --topics msmarco-passage-dev-subset \
# --output run.msmarco-passage.tct_colbert.bf.doc2queryT5.tsv \
# --batch-size 36 --threads 12 \
# --output-format msmarco
# + colab={"base_uri": "https://localhost:8080/"} id="jCK4OrjEie5G" outputId="1ebef40c-d730-4412-cd9a-2496dd3bcc98"
# !gsutil cp run.msmarco-passage.tct_colbert.bf.doc2queryT5.tsv gs://luanps/information_retrieval/pyserini/tct_colbert_bf_doc2queryT5/
# + [markdown] id="dZRX2JNyie5G"
# ### Evaluation
# + id="hdkS1UsRie5G"
#MRR Eval
# !python -m pyserini.eval.msmarco_passage_eval msmarco-passage-dev-subset run.msmarco-passage.tct_colbert.bf.doc2queryT5.tsv >> bf_doc2queryT5_mrr_eval.txt
# + colab={"base_uri": "https://localhost:8080/"} id="VH8eXJqLie5G" outputId="ea52350c-8ff7-4f49-8aae-3cadd0688341"
#TREC Eval
# !python -m pyserini.eval.convert_msmarco_run_to_trec_run --input run.msmarco-passage.tct_colbert.bf.doc2queryT5.tsv \
# --output run.msmarco-passage.tct_colbert.bf.doc2queryT5.trec
# !python -m pyserini.eval.trec_eval -c -mrecall.1000 \
# -mmap msmarco-passage-dev-subset \
# run.msmarco-passage.tct_colbert.bf.doc2queryT5.trec >> bf_doc2queryT5_trec_eval.txt
# + colab={"base_uri": "https://localhost:8080/"} id="A-AgNK8uie5H" outputId="6526ede6-4bd4-4029-e004-349e82d7384d"
# !gsutil cp bf_doc2queryT5_trec_eval.txt bf_doc2queryT5_mrr_eval.txt gs://luanps/information_retrieval/pyserini/tct_colbert_bf_doc2queryT5/
|
Run_pyserini_tct_colbert.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Execute NetApp DataOps Toolkit for Kubernetes operations within Jupyter Notebook
#
# This notebook demonstrates the execution of NetApp DataOps Toolkit for Kubernetes operations from within a Jupyter Notebook
# ## Install NetApp DataOps Toolkit for Kubernetes (only run once)
#
# Note: This cell only needs to be run once; it is a one-time task.
# %pip install --user netapp-dataops-k8s
# ## Import NetApp DataOps Toolkit for Kubernetes functions
from netapp_dataops.k8s import cloneJupyterLab, createJupyterLab, deleteJupyterLab, \
listJupyterLabs, createJupyterLabSnapshot, listJupyterLabSnapshots, restoreJupyterLabSnapshot, \
cloneVolume, createVolume, deleteVolume, listVolumes, createVolumeSnapshot, \
deleteVolumeSnapshot, listVolumeSnapshots, restoreVolumeSnapshot
# ## Execute NetApp DataOps Toolkit for Kubernetes operation
#
# The following example shows the execution of a "create volume snapshot" operation. Replace this with the operation that you wish to execute. Refer to the NetApp DataOps Toolkit [GitHub repository](https://github.com/NetApp/netapp-data-science-toolkit) for details on all of the operations that can be performed using the toolkit.
# Create a VolumeSnapshot for the volume attached to the
# PersistentVolumeClaim (PVC) named 'workspace-test' in namespace 'admin'.
createVolumeSnapshot(pvcName="workspace-test", namespace="admin", printOutput=True)
|
netapp_dataops_k8s/Examples/Kubeflow/Notebooks/NetApp-Example.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # TEXT MINING for BEGINNER
# - This material was created for research and teaching purposes using text mining.
# - If you wish to use this material for teaching purposes, please contact the email address below.
# - Unauthorized distribution of this material is prohibited.
# - For inquiries regarding copyright, publication, patents, or co-authorship, please contact us separately.
# - **Contact : ADMIN(<EMAIL>)**
#
# ---
# ## DAY 06. Crawling Static Pages: Real-Time Search Terms and Movie Reviews
# - Covers how to crawl data from simple web pages using Python.
#
# ---
# > **\*\*\* Caution \*\*\***
# The web-crawling methods described here are presented for educational purposes, to aid understanding of the technique;
# note that large-scale unauthorized crawling based on them may constitute a crime.
# ### Assignment 1. Collecting Naver Movie Reviews in Bulk
# > - Building on this material, write code that collects Naver movie reviews in bulk, together with their metadata.
# - Collect 100 pages' worth of comments from the movie review page.
# - Along with the review text, also collect the posting date and rating.
# - Save the review, date, and rating to a file, separated by tabs (\t).
#
# > HINT. The number at the end of the movie review URL is the page number. ("&page=1")
#
# ---
# +
from bs4 import BeautifulSoup
import urllib.request
PAGE_LIMIT = 100
URL = "https://movie.naver.com/movie/bi/mi/pointWriteFormList.nhn?code=132623&type=after&isActualPointWriteExecute=false&isMileageSubscriptionAlready=false&isMileageSubscriptionReject=false&page="
SAVE_FILE_PATH = "captin_marvel.txt"
f = open(SAVE_FILE_PATH, "w", encoding="utf-8")
for page in range(1, PAGE_LIMIT+1):
    print(page, end="\r")
    url = URL + str(page)
    response = urllib.request.urlopen(url)  # avoid shadowing the loop variable
    soup = BeautifulSoup(response, "html.parser")
    comments = soup.findAll('div', {'class': 'score_reple'})
comment = [tag.p for tag in comments]
for c in comment:
f.write(c.get_text() + "\n")
f.flush()
f.close()
# -
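For the assignment's extra fields (posting date and rating, tab-separated), the same BeautifulSoup pattern applies: find each field's tag and join the texts with `\t`. The snippet below runs on an inline HTML fragment whose class names and structure are simplified assumptions for illustration, not Naver's exact markup:

```python
from bs4 import BeautifulSoup

# A tiny inline snippet mimicking the structure of one review entry
html = """
<div class="score_result">
  <div class="star_score"><em>9</em></div>
  <div class="score_reple"><p>Great movie</p>
    <dt><em>2019.03.08</em></dt>
  </div>
</div>
"""
soup = BeautifulSoup(html, "html.parser")
comment = soup.find('div', {'class': 'score_reple'}).p.get_text().strip()
score = soup.find('div', {'class': 'star_score'}).em.get_text()
date = soup.find('dt').em.get_text()
line = "\t".join([comment, date, score])
print(line)
```

In the real crawler, the same extraction would run once per entry inside the page loop before writing `line + "\n"` to the file.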
|
practice-note/text-mining-for-beginner-appendix2.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import cv2
import matplotlib.pyplot as plt
# %pylab inline
inp = cv2.imread(r"C:\Users\Stefan\Desktop\Unbenannt.PNG")[:,2:,:]
inp.shape
src = np.array([[584,494],[642,494],[748,593],[478,593]],dtype=np.float32)
dst = (np.array([[0,0],[640,0],[640,480],[0,480]],dtype = np.float32) + np.array([160,120],dtype=np.float32)) * np.array([1,2],dtype=np.float32)
ret = cv2.getPerspectiveTransform(src,dst)
# pass flags/borderMode as keywords: the fourth positional argument is dst, not flags
out = cv2.warpPerspective(inp, ret, (960,1040), flags=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
#out
plt.figure(figsize=(15,15))
plt.imshow(cv2.cvtColor(out,cv2.COLOR_BGR2RGB))
plt.show()
|
src/aadcUserPython/bird_view.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# In this tutorial we explore how to price convertible bonds using the tools provided by xalpha.
# +
import xalpha as xa
import pandas as pd
import numpy as np
xa.set_display("notebook")
# -
# Our benchmark is the convertible bond data provided by Futou (富投网), which includes detailed pricing data.
df = xa.misc.get_ri_status()
df
# xalpha's convertible bond calculator estimates the bond-value component very well; the option component uses only a simple Black-Scholes formula to estimate the call value and omits everything else. Let's see how, under these assumptions, the valuations compare with Futou's figures and with the market.
# First, let's see how to use xalpha's convertible bond calculator.
c = xa.CBCalculator("SH113575")
c.analyse()
# Yes, it really is that simple to use: details such as the risk-free rate, volatility, and the bond yields corresponding to different credit ratings and durations are all handled automatically.
# You can also look up a convertible bond's value on any historical date. Note, however, that the engine does not yet handle conversion-price adjustments; it always values the bond at the current conversion price, which may leak future data into historical valuations.
c.analyse("2020-05-01")
df["bvalue"] = 0
df["ovalue"] = 0
df["tvalue"] = 0
df["votality"] = 0
df["rate"] = 0
for i, r in df.iterrows():
code = xa.universal.ttjjcode(r["转债代码"])
c = xa.CBCalculator(code)
d = c.analyse()
df.loc[i, "bvalue"] = d["bond_value"]
df.loc[i, "ovalue"] = d["option_value"]
df.loc[i, "tvalue"] = d["tot_value"]
df.loc[i, "votality"] = d["predicted_volatility"]
df.loc[i, "rate"] = d["bondrate"]
df["ridiff"] = pd.to_numeric(df["内在价值"]) - pd.to_numeric(df["转债价格"])
np.std(sorted(df["ridiff"])[10:-10]), np.mean(sorted(df["ridiff"])[10:-10])
# Futou's valuation vs. the market price
df["ridiff"] = pd.to_numeric(df["tvalue"]) - pd.to_numeric(df["转债价格"])
np.std(sorted(df["ridiff"])[10:-10]), np.mean(sorted(df["ridiff"])[10:-10])
# xalpha's valuation vs. the market price. In practice the simple option valuation still works well; the overall valuation is slightly low because the contributions of several other option features are ignored.
df["ridiff"] = pd.to_numeric(df["tvalue"]) - pd.to_numeric(df["内在价值"])
np.std(sorted(df["ridiff"])[10:-10]), np.mean(sorted(df["ridiff"])[10:-10])
# xalpha's valuation vs. Futou's. On average the two are nearly identical; xalpha's valuation system is fully usable.
# Full table for reference. English-named columns come from xalpha's valuation; Chinese-named columns come from Futou.
df
|
doc/samples/cbond.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
from gensim.models import Word2Vec
import numpy as np
import pandas as pd
import os
import sys
import re
sys.path.insert(0,'/workspace/lang-detect/')
from src import utils
# -
data_path = "/workspace/lang-detect/txt/"
dir_list = os.listdir(data_path)
print(dir_list)
def read_data(filepath):
with open(filepath, "r") as f:
lines = f.readlines()
if len(lines) > 1:
return lines[1].strip("\n")
return None
# %%time
data, labels = [], []
for dir_name in dir_list:
files_list = os.listdir(data_path + dir_name)
for f in files_list:
sent = read_data(data_path + dir_name + "/" + f)
if sent:
data.append(sent)
labels.append(dir_name)
print("Length of data", len(data))
# %%time
for i in range(len(data)):
data[i] = utils.preprocess(data[i])
len(data)
sentences = []
for i in range(len(data)):
x = utils.create_n_gram(data[i], 4)
sentences.append(x.split())
# +
# Create word_to_index and index_to_word mapping
word_to_index, index_to_word = {}, {}
index = 1
for sent in sentences:
for token in sent:
if token not in word_to_index:
word_to_index[token] = index
index_to_word[index] = token
index += 1
print("Vocabulary size", index-1)
# -
# Change words to their index
train_x = []
for sent in sentences:
x = []
for token in sent:
x.append(word_to_index[token])
train_x.append(x)
# Check distribution of length of sentences
lens = {}
for sent in train_x:
if len(sent) in lens:
lens[len(sent)] += 1
else:
lens[len(sent)] = 1
import matplotlib.pyplot as plt
# %matplotlib inline
plt.figure(figsize=(18,9))
plt.bar(lens.keys(), lens.values())
plt.xticks(np.arange(min(lens.keys()), max(lens.keys())+1, 50))
plt.show()
# check number of sentences with length more than 100, 200, 300
n_100, n_200, n_300 = 0, 0, 0
for k,v in lens.items():
if k >= 300:
n_300 += v
if k >= 200:
n_200 += v
if k >= 100:
n_100 += v
print(n_100, n_200, n_300)
# check number of sentences with length more than 500
n_500 = 0
for k,v in lens.items():
if k >= 250:
n_500 += v
print(n_500)
# **There are some very long sentences, but I don't want to lose information. What I am going to do is truncate sentences longer than 250 tokens and create a new sentence from index 251 to 500, and so on. Since I have included only 4-gram tokens in this data, some noise might get added at the start and end of the new sentences, but the model should not be affected much by that.**
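The truncate-and-split strategy described above can be sketched as a small helper: any sequence longer than `max_len` is cut into consecutive `max_len` chunks, and each chunk keeps the label of its source sentence.

```python
max_len = 250

def split_sequence(sent, label, max_len=max_len):
    # Cut a token-id sequence into consecutive chunks of at most max_len,
    # replicating the label once per chunk.
    chunks, labels = [], []
    while len(sent) > max_len:
        chunks.append(sent[:max_len])
        labels.append(label)
        sent = sent[max_len:]
    chunks.append(sent)
    labels.append(label)
    return chunks, labels

chunks, labels = split_sequence(list(range(600)), "eng")
print([len(c) for c in chunks])  # [250, 250, 100]
```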
print(len(train_x), len(labels))
# +
train_x_trunc, train_y_trunc = [], []
max_len = 250
for ind, sent in enumerate(train_x):
while(len(sent) > max_len):
train_x_trunc.append(sent[:max_len])
train_y_trunc.append(labels[ind])
sent = sent[max_len:]
train_x_trunc.append(sent)
train_y_trunc.append(labels[ind])
print(len(train_x_trunc), len(train_y_trunc))
# -
classes = {}
for ind, c in enumerate(list(set(train_y_trunc))):
classes[c] = ind
print(classes)
for ind in range(len(train_y_trunc)):
train_y_trunc[ind] = classes[train_y_trunc[ind]]
print(set(train_y_trunc))
# Check distribution of length of sentences
lens = {}
for sent in train_x_trunc:
if len(sent) in lens:
lens[len(sent)] += 1
else:
lens[len(sent)] = 1
buckets = [10*x for x in range(1,26)]
buckets_data_sum = {}
for k, v in lens.items():
for x in buckets:
if k <= x:
if x in buckets_data_sum:
buckets_data_sum[x] += v
else:
buckets_data_sum[x] = v
break
buckets_data_sum
sum(buckets_data_sum.values())
# +
from collections import defaultdict
# Create batch data
batch_data = defaultdict(list)
batch_label = defaultdict(list)
PAD_ID = 0
for ind, sent in enumerate(train_x_trunc):
for x in buckets:
if len(sent) <= x:
sent += [PAD_ID]*(x - len(sent))
batch_data[x].append(sent)
batch_label[x].append(train_y_trunc[ind])
break
# -
batch_data[10][:10]
# +
import pickle
with open('../data/batch_data.npy', 'w') as f:
pickle.dump(batch_data, f)
with open('../data/batch_label.npy', 'w') as f:
pickle.dump(batch_label, f)
# -
# Create embedding matrix
wordvec = Word2Vec.load("../word-vectors/word2vec")
wordvec.wv.most_similar("the")
wordvec.wv.get_vector("the")
# Tokens are indexed starting from 1. '0' is for the padding. Vector for 0 will be zero-vector.
vocab_size, embed_size = wordvec.wv.vectors.shape
sorted_word_to_index = sorted(zip(word_to_index.keys(), word_to_index.values()), key=lambda x: x[1])
sorted_word_to_index[0]
# +
embedding_matrix = []
embedding_matrix.append([0.]*embed_size)
for key, index in sorted_word_to_index:
embedding_matrix.append(wordvec.wv.get_vector(key))
# -
print(len(embedding_matrix), len(embedding_matrix[0]))
# +
embedding_matrix = np.array(embedding_matrix)
import pickle
with open('../data/embedding_matrix.npy', 'w') as f:
pickle.dump(embedding_matrix, f)
# -
with open('../data/embedding_matrix.npy', 'r') as f:
ld = pickle.load(f)
ld.shape
|
src/ml/.ipynb_checkpoints/seq-model-processing-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="KB3KAlcFUTai"
# # Chapter 3 - Algorithms and Numbers
# + [markdown] id="aSN4GzA2UTam"
# ### 3.2 The Greatest Common Divisor and Its Applications
# + id="eU2pBV6yUTam"
def reduir_fraccio(numerador, denominador):
    """
    Returns the irreducible form of a fraction.
    Parameters
    ----------
    numerador: int
    denominador: int
    Returns
    -------
    numReduit: int
    denReduit: int
    """
    # Divide both terms by their greatest common divisor
    from math import gcd
    d = gcd(numerador, denominador)
    numReduit = numerador // d
    denReduit = denominador // d
    return (numReduit, denReduit)
# + id="9J9Oq82_UTan"
assert reduir_fraccio(12, 8) == (3,2)
|
Algorismica/3.2 (1).ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] toc="true"
# # Table of Contents
# <p><div class="lev1 toc-item"><a href="#Load-libraries" data-toc-modified-id="Load-libraries-1"><span class="toc-item-num">1 </span>Load libraries</a></div><div class="lev1 toc-item"><a href="#Define-loss-functions" data-toc-modified-id="Define-loss-functions-2"><span class="toc-item-num">2 </span>Define loss functions</a></div><div class="lev1 toc-item"><a href="#Define-models" data-toc-modified-id="Define-models-3"><span class="toc-item-num">3 </span>Define models</a></div><div class="lev1 toc-item"><a href="#Modeling" data-toc-modified-id="Modeling-4"><span class="toc-item-num">4 </span>Modeling</a></div><div class="lev1 toc-item"><a href="#Predictions" data-toc-modified-id="Predictions-5"><span class="toc-item-num">5 </span>Predictions</a></div>
# -
# Modified from https://github.com/petrosgk/Kaggle-Carvana-Image-Masking-Challenge
# # Load libraries
# +
import cv2
import numpy as np
import pandas as pd
from keras.callbacks import EarlyStopping, ReduceLROnPlateau, ModelCheckpoint, TensorBoard
from keras.models import Model
from keras.layers import Input, concatenate, Conv2D, MaxPooling2D, Activation, UpSampling2D, BatchNormalization
from keras.optimizers import RMSprop
from keras.losses import binary_crossentropy
import keras.backend as K
from sklearn.model_selection import train_test_split
# -
# # Define loss functions
# +
def dice_loss(y_true, y_pred):
smooth = 1.
y_true_f = K.flatten(y_true)
y_pred_f = K.flatten(y_pred)
intersection = K.sum(y_true_f * y_pred_f)
return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
def bce_dice_loss(y_true, y_pred):
return binary_crossentropy(y_true, y_pred) + (1 - dice_loss(y_true, y_pred))
# -
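As a quick sanity check of the Dice formula above, here is a standalone numpy re-implementation (independent of Keras; the smoothing term of 1 matches the `smooth = 1.` used in `dice_loss`). Perfect overlap should score 1, and disjoint masks should score close to 0.

```python
import numpy as np

def dice_coeff(y_true, y_pred, smooth=1.0):
    # Same formula as dice_loss above, applied to flat numpy arrays
    y_true = y_true.ravel()
    y_pred = y_pred.ravel()
    intersection = (y_true * y_pred).sum()
    return (2.0 * intersection + smooth) / (y_true.sum() + y_pred.sum() + smooth)

mask = np.array([[1.0, 1.0], [0.0, 0.0]])
print(dice_coeff(mask, mask))        # perfect overlap -> 1.0
print(dice_coeff(mask, 1.0 - mask))  # no overlap -> 0.2 (pulled above 0 by smoothing)
```

Note that `dice_loss` above actually returns the Dice *coefficient* (higher is better), which is why `bce_dice_loss` uses `1 - dice_loss(...)` and the callbacks below monitor it with `mode='max'`.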
# # Define models
def unet_down_one_block(inputs, num_filters):
x = Conv2D(num_filters, (3, 3), padding='same')(inputs)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = Conv2D(num_filters, (3, 3), padding='same')(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
return x
def unet_max_pool(inputs):
x = MaxPooling2D((2, 2), strides=(2, 2))(inputs)
return x
def unet_up_one_block(up_input, down_input, num_filters):
x = UpSampling2D((2,2))(up_input)
x = concatenate([down_input, x], axis=3)
x = Conv2D(num_filters, (3,3), padding='same')(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = Conv2D(num_filters, (3,3), padding='same')(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = Conv2D(num_filters, (3,3), padding='same')(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
return x
def get_unet(input_shape = (256, 256, 3),
num_classes = 1,
initial_filters = 32,
central_filters = 1024):
num_filters = initial_filters
out_list = [Input(shape=input_shape)]
down_interim_list = []
while num_filters <= central_filters/2:
x = unet_down_one_block(out_list[-1], num_filters)
down_interim_list.append(x)
num_filters = num_filters * 2
y = unet_max_pool(x)
out_list.append(y)
x = unet_down_one_block(out_list[-1], num_filters)
out_list.append(x)
num_filters = int(num_filters / 2)
while num_filters >= initial_filters:
x = unet_up_one_block(out_list[-1], down_interim_list.pop(), num_filters)
out_list.append(x)
num_filters = int(num_filters / 2)
classify = Conv2D(num_classes, (1,1), activation = 'sigmoid')(out_list[-1])
model = Model(inputs=out_list[0], outputs=classify)
model.compile(optimizer=RMSprop(lr=0.0001),
loss=bce_dice_loss,
metrics=[dice_loss])
return model
model = get_unet()
model.summary()
from keras.utils import plot_model
plot_model(model, to_file='model-256.png', show_shapes=True, show_layer_names=True)
from IPython.display import FileLink
FileLink('model-256.png')
# # Modeling
input_size = 256
max_epochs = 50
batch_size = 16
orig_width = 1918
orig_height= 1280
threshold = 0.5
df_train = pd.read_csv('data/train_masks.csv')
df_train.head()
ids_train = df_train['img'].map(lambda s: s.split('.')[0])
ids_train_split, ids_valid_split = train_test_split(ids_train, test_size=0.2, random_state=42)
def randomHueSaturationValue(image, hue_shift_limit=(-180, 180),
sat_shift_limit=(-255, 255),
val_shift_limit=(-255, 255), u=0.5):
if np.random.random() < u:
image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(image)
hue_shift = np.random.uniform(hue_shift_limit[0], hue_shift_limit[1])
h = cv2.add(h, hue_shift)
sat_shift = np.random.uniform(sat_shift_limit[0], sat_shift_limit[1])
s = cv2.add(s, sat_shift)
val_shift = np.random.uniform(val_shift_limit[0], val_shift_limit[1])
v = cv2.add(v, val_shift)
image = cv2.merge((h, s, v))
image = cv2.cvtColor(image, cv2.COLOR_HSV2BGR)
return image
def randomShiftScaleRotate(image, mask,
shift_limit=(-0.0625, 0.0625),
scale_limit=(-0.1, 0.1),
rotate_limit=(-45, 45), aspect_limit=(0, 0),
borderMode=cv2.BORDER_CONSTANT, u=0.5):
if np.random.random() < u:
height, width, channel = image.shape
angle = np.random.uniform(rotate_limit[0], rotate_limit[1]) # degree
scale = np.random.uniform(1 + scale_limit[0], 1 + scale_limit[1])
aspect = np.random.uniform(1 + aspect_limit[0], 1 + aspect_limit[1])
sx = scale * aspect / (aspect ** 0.5)
sy = scale / (aspect ** 0.5)
dx = round(np.random.uniform(shift_limit[0], shift_limit[1]) * width)
dy = round(np.random.uniform(shift_limit[0], shift_limit[1]) * height)
cc = np.math.cos(angle / 180 * np.math.pi) * sx
ss = np.math.sin(angle / 180 * np.math.pi) * sy
rotate_matrix = np.array([[cc, -ss], [ss, cc]])
box0 = np.array([[0, 0], [width, 0], [width, height], [0, height], ])
box1 = box0 - np.array([width / 2, height / 2])
box1 = np.dot(box1, rotate_matrix.T) + np.array([width / 2 + dx, height / 2 + dy])
box0 = box0.astype(np.float32)
box1 = box1.astype(np.float32)
mat = cv2.getPerspectiveTransform(box0, box1)
image = cv2.warpPerspective(image, mat, (width, height), flags=cv2.INTER_LINEAR, borderMode=borderMode,
borderValue=(
0, 0,
0,))
mask = cv2.warpPerspective(mask, mat, (width, height), flags=cv2.INTER_LINEAR, borderMode=borderMode,
borderValue=(
0, 0,
0,))
return image, mask
def randomHorizontalFlip(image, mask, u=0.5):
if np.random.random() < u:
image = cv2.flip(image, 1)
mask = cv2.flip(mask, 1)
return image, mask
import random
def train_generator():
while True:
this_ids_train_split = random.sample(list(ids_train_split), len(ids_train_split))
for start in range(0, len(ids_train_split), batch_size):
x_batch = []
y_batch = []
end = min(start + batch_size, len(ids_train_split))
ids_train_batch = this_ids_train_split[start:end]
for id in ids_train_batch:
img = cv2.imread('data/train/{}.jpg'.format(id))
img = cv2.resize(img, (input_size, input_size))
mask = cv2.imread('data/train_masks/{}_mask.png'.format(id), cv2.IMREAD_GRAYSCALE)
mask = cv2.resize(mask, (input_size, input_size))
img = randomHueSaturationValue(img,
hue_shift_limit=(-50, 50),
sat_shift_limit=(-5, 5),
val_shift_limit=(-15, 15))
img, mask = randomShiftScaleRotate(img, mask,
shift_limit=(-0.0625, 0.0625),
scale_limit=(-0.1, 0.1),
rotate_limit=(-0, 0))
img, mask = randomHorizontalFlip(img, mask)
mask = np.expand_dims(mask, axis=2)
x_batch.append(img)
y_batch.append(mask)
x_batch = np.array(x_batch, np.float32) / 255
y_batch = np.array(y_batch, np.float32) / 255
yield x_batch, y_batch
# +
def valid_generator():
while True:
for start in range(0, len(ids_valid_split), batch_size):
x_batch = []
y_batch = []
end = min(start + batch_size, len(ids_valid_split))
ids_valid_batch = ids_valid_split[start:end]
for id in ids_valid_batch.values:
img = cv2.imread('data/train/{}.jpg'.format(id))
img = cv2.resize(img, (input_size, input_size))
mask = cv2.imread('data/train_masks/{}_mask.png'.format(id), cv2.IMREAD_GRAYSCALE)
mask = cv2.resize(mask, (input_size, input_size))
mask = np.expand_dims(mask, axis=2)
x_batch.append(img)
y_batch.append(mask)
x_batch = np.array(x_batch, np.float32) / 255
y_batch = np.array(y_batch, np.float32) / 255
yield x_batch, y_batch
# +
callbacks = [EarlyStopping(monitor='val_dice_loss',
patience=8,
verbose=1,
min_delta=1e-4,
mode='max'),
ReduceLROnPlateau(monitor='val_dice_loss',
factor=0.1,
patience=4,
verbose=1,
epsilon=1e-4,
mode='max'),
ModelCheckpoint(monitor='val_dice_loss',
filepath='weights/best_weights_256.hdf5',
save_best_only=True,
save_weights_only=True,
mode='max'),
TensorBoard(log_dir='logs')]
history = model.fit_generator(generator=train_generator(),
steps_per_epoch=np.ceil(float(len(ids_train_split)) / float(batch_size)),
epochs=max_epochs,
verbose=2,
callbacks=callbacks,
validation_data=valid_generator(),
validation_steps=np.ceil(float(len(ids_valid_split)) / float(batch_size)))
# -
history.history
model.load_weights('./weights/best_weights_256.hdf5')
model.compile(optimizer=RMSprop(lr=0.00001),
loss=bce_dice_loss,
metrics=[dice_loss])
# +
callbacks = [EarlyStopping(monitor='val_dice_loss',
patience=8,
verbose=1,
min_delta=1e-4,
mode='max'),
ReduceLROnPlateau(monitor='val_dice_loss',
factor=0.1,
patience=4,
verbose=1,
epsilon=1e-4,
mode='max'),
ModelCheckpoint(monitor='val_dice_loss',
filepath='weights/best_weights_256_2.hdf5',
save_best_only=True,
save_weights_only=True,
mode='max')]
history = model.fit_generator(generator=train_generator(),
steps_per_epoch=np.ceil(float(len(ids_train_split)) / float(batch_size)),
epochs=max_epochs,
verbose=2,
callbacks=callbacks,
validation_data=valid_generator(),
validation_steps=np.ceil(float(len(ids_valid_split)) / float(batch_size)))
# -
history.history
# # Predictions
from tqdm import tqdm
df_test = pd.read_csv('data/sample_submission.csv')
ids_test = df_test['img'].map(lambda s: s.split('.')[0])
names = []
for id in ids_test:
names.append('{}.jpg'.format(id))
# +
# https://www.kaggle.com/stainsby/fast-tested-rle
def run_length_encode(mask):
    '''
    mask: numpy array, 1 - mask, 0 - background
    Returns run length as a formatted string
    '''
inds = mask.flatten()
runs = np.where(inds[1:] != inds[:-1])[0] + 2
runs[1::2] = runs[1::2] - runs[:-1:2]
rle = ' '.join([str(r) for r in runs])
return rle
rles = []
model.load_weights(filepath='weights/best_weights_256_2.hdf5')
# -
print('Predicting on {} samples with batch_size = {}...'.format(len(ids_test), batch_size))
for start in tqdm(range(0, len(ids_test), batch_size)):
x_batch = []
end = min(start + batch_size, len(ids_test))
ids_test_batch = ids_test[start:end]
for id in ids_test_batch.values:
img = cv2.imread('data/test/{}.jpg'.format(id))
img = cv2.resize(img, (input_size, input_size))
x_batch.append(img)
x_batch = np.array(x_batch, np.float32) / 255
preds = model.predict_on_batch(x_batch)
preds = np.squeeze(preds, axis=3)
for pred in preds:
prob = cv2.resize(pred, (orig_width, orig_height))
mask = prob > threshold
rle = run_length_encode(mask)
rles.append(rle)
print("Generating submission file...")
df = pd.DataFrame({'img': names, 'rle_mask': rles})
df.to_csv('submit/submission7.csv.gz', index=False, compression='gzip')
|
02-unet-256x256.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# HIDDEN
from datascience import *
from prob140 import *
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
# %matplotlib inline
from scipy import stats
# ## Linear Combinations ##
# Let $\mathbf{X}$ be multivariate normal with mean vector $\boldsymbol{\mu}$ and covariance matrix $\boldsymbol{\Sigma}$. Definition 3 says that all linear combinations of elements of $\mathbf{X}$ are normal too. This makes many calculations straightforward. Here is an example in two dimensions.
# ### Sum and Difference ###
# Let $\mathbf{X} = [X_1 ~ X_2]^T$ have the bivariate normal distribution with mean vector $\boldsymbol{\mu} = [\mu_1 ~ \mu_2]^T$ and covariance matrix $\boldsymbol{\Sigma}$.
#
# Then the sum $S = X_1 + X_2$ has the normal distribution with mean $\mu_1 + \mu_2$ and variance
#
# $$
# Var(S) ~ = ~ Var(X_1) + Var(X_2) + 2Cov(X_1, X_2)
# $$
#
# which you can calculate based on $\boldsymbol{\Sigma}$.
#
# The difference $D= X_1 - X_2$ has the normal distribution with mean $\mu_1 - \mu_2$ and variance
#
# $$
# Var(D) ~ = ~ Var(X_1) + Var(X_2) - 2Cov(X_1, X_2)
# $$
#
# No matter what the linear combination of elements of $\mathbf{X}$, its distribution is normal. To identify the parameters of the distribution, work out the mean and variance using properties of means and variances and then find the necessary components from the mean vector and covariance matrix of $\mathbf{X}$. Once you have the mean and variance, you are all set to find probabilities by using the normal curve as usual.
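As a numerical illustration of this recipe (with an assumed mean vector and covariance matrix), a linear combination $a^T\mathbf{X}$ has mean $a^T\boldsymbol{\mu}$ and variance $a^T\boldsymbol{\Sigma}a$:

```python
import numpy as np
from scipy import stats

# Assumed parameters for X = [X_1, X_2]^T
mu = np.array([3.0, 5.0])
sigma = np.array([[4.0, 1.5],
                  [1.5, 9.0]])

# S = X_1 + X_2 and D = X_1 - X_2 as linear combinations a^T X
a_sum, a_diff = np.array([1.0, 1.0]), np.array([1.0, -1.0])
mean_S, var_S = a_sum @ mu, a_sum @ sigma @ a_sum     # 8.0 and 4 + 9 + 2(1.5) = 16.0
mean_D, var_D = a_diff @ mu, a_diff @ sigma @ a_diff  # -2.0 and 4 + 9 - 2(1.5) = 10.0

# Probabilities then come from the normal curve as usual, e.g. P(S > 10)
print(1 - stats.norm.cdf(10, loc=mean_S, scale=np.sqrt(var_S)))
```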
# ### Joint Distribution of Linear Combinations ###
# Definition 2 implies that the joint distribution of a finite number of linear combinations of $\mathbf{X}$ is multivariate normal. In the example above, not only does each of $S$ and $D$ have a normal distribution, the joint distribution of $S$ and $D$ is bivariate normal. We found the mean vector and all but one element of the covariance matrix in the calculations above. The remaining element is
#
# $$
# Cov(S, D) ~ = ~ Cov(X_1 + X_2, X_1 - X_2) ~ = ~ Var(X_1) - Var(X_2)
# $$
# by bilinearity and symmetry of covariance.
# ### Marginals ###
# Each $X_i$ is a linear combination of elements of $\mathbf{X}$: the combination that has coefficient 1 at index $i$ and 0 everywhere else. So each $X_i$ has the normal distribution. The parameters of this normal distribution can be read off the mean vector and covariance matrix: $E(X_i) = \boldsymbol{\mu}(i)$ and $Var(X_i) = \boldsymbol{\Sigma}(i, i)$.
#
# But be warned: **the converse is not true**. If all the marginals of a random vector are normal, the joint distribution need not be multivariate normal.
# ### A Cautionary Tale ###
# The cells below show the empirical joint and marginal distributions of an interesting data set. Read the comment at the top of each cell to see what is being computed and displayed.
# +
# Generate 100,000 iid standard normal points
x = stats.norm.rvs(size=100000)
y = stats.norm.rvs(size=100000)
t = Table().with_column(
'X', x,
'Y', y
)
# +
# Select just those where both elements have the same sign
new = t.where(t.column(0) * t.column(1) > 0)
# +
# The restricted pairs are not jointly normal;
# that shape isn't an ellipse
new.scatter(0, 1)
# +
# Empirical distribution of horizontal coordinate
new.hist(0, bins=25, ec='w')
plt.xticks(np.arange(-5, 6));
# +
# Empirical distribution of vertical coordinate
new.hist(1, bins=25, ec='w')
plt.xticks(np.arange(-5, 6));
# -
# Both marginals are normal but the joint distribution is far from bivariate normal.
#
# To get the formula for the joint density of these variables, start with the circularly symmetric joint density of two i.i.d. standard normals and restrict it to Quadrants 1 and 3. This leaves out half of the volume under the original surface, so remember to multiply by 2 to make the total volume under the new surface equal to 1.
# +
def new_density(x,y):
if x*y > 0:
return 1/np.pi * np.exp(-0.5*(x**2 + y**2))
else:
return 0
Plot_3d((-4, 4), (-4, 4), new_density, rstride=4, cstride=5)
# -
|
notebooks/Chapter_23/03_Linear_Combinations.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] deletable=true editable=true
# ## Setup:
# + deletable=true editable=true
import numpy as np
from numpy.linalg import *
rg = matrix_rank
from IPython.display import display, Math, Latex, Markdown
from sympy import *
pr = lambda s: display(Markdown('$'+str(latex(s))+'$'))
def pmatrix(a, intro='',ending='',row=False):
if len(a.shape) > 2:
raise ValueError('pmatrix can at most display two dimensions')
lines = str(a).replace('[', '').replace(']', '').splitlines()
rv = [r'\begin{pmatrix}']
rv += [' ' + ' & '.join(l.split()) + r'\\' for l in lines]
rv += [r'\end{pmatrix}']
if row:
return(intro+'\n'.join(rv)+ending)
else:
display(Latex('$$'+intro+'\n'.join(rv)+ending+'$$'))
# + [markdown] deletable=true editable=true
# # Problem 7
# ## 1) given:
# + deletable=true editable=true
C = np.array([[1,2],
[0,1]])
pmatrix(C, intro=r'C_{2\times 2}=')
D = np.array([[3,1],
[1,0]])
pmatrix(D, intro=r'D_{2\times 2}=')
B = np.array([[5,1],
[5,2]])
pmatrix(B, intro=r'B_{2\times 2}=')
# + deletable=true editable=true
A = np.array([[5,6],
[3,4]])
pmatrix(rg(A), intro=r'rg(A)=')
# + deletable=true editable=true
pmatrix(inv(C), intro=r'C^{-1}=')
pmatrix(B.T, intro=r'B^{T}=')
pmatrix(B.dot(C), intro=r'BC=')
pmatrix(rg(B), intro=r'rg(B)=')
pmatrix(det(B), intro=r'det(B)=')
# + deletable=true editable=true
A = np.array([[2,6],
[1,3]])
#pmatrix(rg(B), intro=r'rg(B)=')
pmatrix(rg(A), intro=r'rg(A)=')
#pmatrix(rg(A.dot(B)), intro=r'rg(AB)=')
# + [markdown] deletable=true editable=true
# # Part 3
# + [markdown] deletable=true editable=true
# ## A small example
# + deletable=true editable=true
a1 = Symbol('a_{12}')
b1 = Symbol('b_{11}')
c1 = Symbol('c_{22}')
d1 = Symbol('d_{21}')
X =np.array([[a1,b1],
[c1,d1]])
B = np.array([[5,1],
[5,2]])
C1 = np.array([[1,1],
[1,2]])
D1 = np.array([[2,1],
[1,0]])
C2 = np.array([[1,-1],
[0,1]])
D2 = np.array([[1,1],
[0,1]])
pmatrix(B.reshape((4, 1)), intro="B=")
# + deletable=true editable=true
pmatrix( (C1.dot(X)).dot(D1))
A = (C1.dot(X)).dot(D1) + (C2.dot(X)).dot(D2)
pmatrix(A)
F = np.array([[3,1,1,1],
[2,1,0,-1],
[2,1,5,2],
[1,0,3,1]])
pmatrix(F, ending=pmatrix(X.reshape((4, 1)),row=True)+"="+pmatrix(B.reshape((4, 1)),row=True))
pmatrix(rg(F), intro=r'rg(F)=')
print("So there is a proper solution!")
# + [markdown] deletable=true editable=true
# # Let's solve it!!!
#
# + deletable=true editable=true
from sympy import Matrix, solve_linear_system
from sympy.abc import a,b,c,d
# + [markdown] deletable=true editable=true
# Example:
# x + 4 y == 2
# -2 x + y == 14
#
# >from sympy import Matrix, solve_linear_system
# >from sympy.abc import x, y
#
# >system = Matrix(( (1, 4, 2), (-2, 1, 14)))
# >solve_linear_system(system, x, y)
# + deletable=true editable=true
system = Matrix(( (3,1,1,1,5), (2,1,0,-1,1), (2,1,5,2,5),(1,0,3,1,2) ))
x = solve_linear_system(system, a,b,c,d)
X =np.array([[x[a],x[b]],[x[c],x[d]] ])
# + deletable=true editable=true
pmatrix(X,intro="X=")
# + deletable=true editable=true
# + deletable=true editable=true
x = Symbol('x')
y = Symbol('y')
pr(integrate(sqrt(4*x-x**2), x))
# + deletable=true editable=true
|
7ex/math.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] colab_type="text" id="kBQzOeY8Iaq3"
# # Bayesian Switchpoint Analysis
#
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Bayesian_Switchpoint_Analysis.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Bayesian_Switchpoint_Analysis.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
# </td>
# </table>
# + [markdown] colab_type="text" id="5XxXGBbRgsq2"
# This notebook reimplements and extends the Bayesian “Change point analysis” example from the [pymc3 documentation](https://docs.pymc.io/notebooks/getting_started.html#Case-study-2:-Coal-mining-disasters).
# + [markdown] colab_type="text" id="_mpkdys-KrTT"
# ## Prerequisites
# + colab={} colab_type="code" id="t_Uo-kwnGqZi"
import tensorflow as tf
import tensorflow_probability as tfp
from tensorflow_probability import edward2 as ed
tfd = tfp.distributions
tfb = tfp.bijectors
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (15,8)
# %config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
# + [markdown] colab_type="text" id="53lnVbvHKtH9"
# ## Dataset
# + [markdown] colab_type="text" id="829rNEHfKyEq"
# The dataset is from [here](https://pymc-devs.github.io/pymc/tutorial.html#two-types-of-variables). Note, there is another version of this example [floating around](https://docs.pymc.io/notebooks/getting_started.html#Case-study-2:-Coal-mining-disasters), but it has “missing” data – in which case you’d need to impute missing values. (Otherwise your model will not ever leave its initial parameters because the likelihood function will be undefined.)
# + colab={"height": 529} colab_type="code" executionInfo={"elapsed": 1281, "status": "ok", "timestamp": 1546900515061, "user": {"displayName": "", "photoUrl": "", "userId": ""}, "user_tz": 480} id="XGMvb9_DObuU" outputId="4f9342df-3e69-47b9-88db-ea1fee1c59ad"
disaster_data = np.array([ 4, 5, 4, 0, 1, 4, 3, 4, 0, 6, 3, 3, 4, 0, 2, 6,
3, 3, 5, 4, 5, 3, 1, 4, 4, 1, 5, 5, 3, 4, 2, 5,
2, 2, 3, 4, 2, 1, 3, 2, 2, 1, 1, 1, 1, 3, 0, 0,
1, 0, 1, 1, 0, 0, 3, 1, 0, 3, 2, 2, 0, 1, 1, 1,
0, 1, 0, 1, 0, 0, 0, 2, 1, 0, 0, 0, 1, 1, 0, 2,
3, 3, 1, 1, 2, 1, 1, 1, 1, 2, 4, 2, 0, 0, 1, 4,
0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1])
years = np.arange(1851, 1962)
plt.plot(years, disaster_data, 'o', markersize=8);
plt.ylabel('Disaster count')
plt.xlabel('Year')
plt.title('Mining disaster data set')
plt.show()
# + [markdown] colab_type="text" id="puvRMZnQLRmD"
# ## Probabilistic Model
# + [markdown] colab_type="text" id="2UA6HkV0gXvb"
# The model assumes a “switch point” (e.g. a year during which safety regulations changed), and Poisson-distributed disaster rate with constant (but potentially different) rates before and after that switch point.
#
# The actual disaster count is fixed (observed); any sample of this model will need to specify both the switchpoint and the “early” and “late” rate of disasters.
#
# Original model from [pymc3 documentation example](https://pymc-devs.github.io/pymc/tutorial.html):
#
# $$
# \begin{align*}
# (D_t|s,e,l)&\sim \text{Poisson}(r_t), \\
# & \,\quad\text{with}\; r_t = \begin{cases}e & \text{if}\; t < s\\l &\text{if}\; t \ge s\end{cases} \\
# s&\sim\text{Discrete Uniform}(t_l,\,t_h) \\
# e&\sim\text{Exponential}(r_e)\\
# l&\sim\text{Exponential}(r_l)
# \end{align*}
# $$
#
# However, the mean disaster rate $r_t$ has a discontinuity at the switchpoint $s$, which makes it not differentiable. Thus it provides no gradient signal to the Hamiltonian Monte Carlo (HMC) algorithm – but because the $s$ prior is continuous, HMC’s fallback to a random walk is good enough to find the areas of high probability mass in this example.
#
# As a second model, we modify the original model using a [sigmoid “switch”](https://en.wikipedia.org/wiki/Sigmoid_function) between *e* and *l* to make the transition differentiable, and use a continuous uniform distribution for the switchpoint $s$. (One could argue this model is more true to reality, as a “switch” in mean rate would likely be stretched out over multiple years.) The new model is thus:
#
# $$
# \begin{align*}
# (D_t|s,e,l)&\sim\text{Poisson}(r_t), \\
# & \,\quad \text{with}\; r_t = e + \frac{1}{1+\exp(s-t)}(l-e) \\
# s&\sim\text{Uniform}(t_l,\,t_h) \\
# e&\sim\text{Exponential}(r_e)\\
# l&\sim\text{Exponential}(r_l)
# \end{align*}
# $$
#
# In the absence of more information we assume $r_e = r_l = 1$ as parameters for the priors. We’ll run both models and compare their inference results.
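The smooth transition of the second model can be sketched directly in numpy (with assumed values for $e$, $l$ and $s$) to see how the sigmoid interpolates between the two rates:

```python
import numpy as np

def sigmoid_rate(t, s, e, l):
    # r_t = e + sigmoid(t - s) * (l - e): smooth interpolation from e to l around s
    return e + (l - e) / (1.0 + np.exp(s - t))

t = np.arange(0, 111)                     # indices into the years array
r = sigmoid_rate(t, s=40.0, e=3.0, l=1.0)  # assumed early rate 3, late rate 1, switch at 40
print(r[0], r[55], r[110])                 # ~3 well before s, ~1 well after s
```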
# + colab={} colab_type="code" id="TGIiP8niPHxr"
def disaster_count_model_switch():
early_disaster_rate = ed.Exponential(rate=1., name='early_disaster_rate')
late_disaster_rate = ed.Exponential(rate=1., name='late_disaster_rate')
switchpoint = ed.Uniform(low=0., high=tf.to_float(len(years)),
name='switchpoint')
def disaster_rate(ys):
return [tf.where(y < switchpoint, early_disaster_rate, late_disaster_rate)
for y in ys]
disaster_count = ed.Poisson(rate=disaster_rate(np.arange(len(years))),
name='disaster_count')
return disaster_count
def disaster_count_model_sigmoid():
early_disaster_rate = ed.Exponential(rate=1., name='early_disaster_rate')
late_disaster_rate = ed.Exponential(rate=1., name='late_disaster_rate')
switchpoint = ed.Uniform(low=0., high=tf.to_float(len(years)),
name='switchpoint')
def disaster_rate(ys):
return (early_disaster_rate +
tf.sigmoid((tf.to_float(ys)-switchpoint)) *
(late_disaster_rate - early_disaster_rate))
disaster_count = ed.Poisson(rate=disaster_rate(np.arange(len(years))),
name='disaster_count')
return disaster_count
log_joint_switch = ed.make_log_joint_fn(disaster_count_model_switch)
log_joint_sigmoid = ed.make_log_joint_fn(disaster_count_model_sigmoid)
def target_log_prob_fn(log_joint, switchpoint, early_disaster_rate, late_disaster_rate):
"""
Pass named parameters to log_joint function; disaster_count is the observed
value hence receives the constant disaster_data.
"""
named_args = {
'switchpoint': switchpoint,
'early_disaster_rate': early_disaster_rate,
'late_disaster_rate': late_disaster_rate,
'disaster_count': disaster_data,
}
return log_joint(**named_args)
# + [markdown] colab_type="text" id="KgPr-m-FSMF2"
# The above code does three things:
#
# 1. Define the model via Edward distributions. The `disaster_rate` functions are called with an array of `[0, ..., len(years)-1]` to produce a vector of `len(years)` random variables – the years before the `switchpoint` are `early_disaster_rate`, the ones after `late_disaster_rate` (modulo the sigmoid transition).
# 1. Define the logprob function for the joint probability distribution. Each of the two functions needs to be called with the four named parameters in the model and outputs the log probability of that outcome. (See below for an example.)
# 1. The `target_log_prob_fn` passes the variable elements of the model from the method arguments through to the joint probability function, and specifies the fixed (“observed”) outcomes directly. To avoid duplication and because the method signatures of both logprob functions are the same, we make `target_log_prob_fn` take the desired logprob function as first argument.
#
# Here is a sanity-check that the target log prob function is sane:
# + colab={"height": 72} colab_type="code" executionInfo={"elapsed": 4636, "status": "ok", "timestamp": 1546900520164, "user": {"displayName": "", "photoUrl": "", "userId": ""}, "user_tz": 480} id="OIxZCvGeHkQd" outputId="b009bb47-4119-4c24-b7a5-fcd44a23ea3a"
with tf.Session() as sess:
fs = [log_joint_switch, log_joint_sigmoid]
print([target_log_prob_fn(f, 40., 3., .9).eval() for f in fs]) # Somewhat likely result
print([target_log_prob_fn(f, 60., 1., 5.).eval() for f in fs]) # Rather unlikely result
print([target_log_prob_fn(f, -10., 1., 1.).eval() for f in fs]) # Impossible result
# + [markdown] colab_type="text" id="tuzwBtQxUAES"
# ## HMC to do Bayesian inference
#
# We define the number of results and burn-in steps required; the code is mostly modeled after [the documentation of tfp.mcmc.HamiltonianMonteCarlo](https://www.tensorflow.org/probability/api_docs/python/tfp/mcmc/HamiltonianMonteCarlo). It uses an adaptive step size (otherwise the outcome is very sensitive to the step size value chosen). We use values of one as the initial state of the chain.
#
# This is not the full story though. If you go back to the model definition above, you’ll note that some of the probability distributions are not well-defined on the whole real number line. Therefore we constrain the space that HMC shall examine by wrapping the HMC kernel with a [TransformedTransitionKernel](https://www.tensorflow.org/probability/api_docs/python/tfp/mcmc/TransformedTransitionKernel) that specifies the forward bijectors to transform the real numbers onto the domain that the probability distribution is defined on (see comments in the code below).
# + colab={} colab_type="code" id="_V57TSedc8wb"
num_results = 10000
num_burnin_steps = 3000
def make_chain(i, target_log_prob):
with tf.variable_scope("params", reuse=tf.AUTO_REUSE):
step_size = tf.get_variable(
name='step_size_model{}'.format(i),
initializer=1.,
trainable=False)
step_size_adaptation_step_counter = tf.get_variable(
name='step_size_adaptation_step_counter{}'.format(i),
initializer=-1,
dtype=tf.int32,
trainable=False)
states, _ = tfp.mcmc.sample_chain(
num_results=num_results,
num_burnin_steps=num_burnin_steps,
current_state=[
# The three latent variables
tf.ones([], name='init_switchpoint'),
tf.ones([], name='init_early_disaster_rate'),
tf.ones([], name='init_late_disaster_rate'),
],
kernel=tfp.mcmc.TransformedTransitionKernel(
inner_kernel=tfp.mcmc.HamiltonianMonteCarlo(
target_log_prob_fn=target_log_prob,
step_size=step_size,
step_size_update_fn=tfp.mcmc.make_simple_step_size_update_policy(
num_adaptation_steps=int(0.8*num_burnin_steps),
step_counter=step_size_adaptation_step_counter),
num_leapfrog_steps=3),
bijector=[
# The switchpoint is constrained between zero and len(years).
# Hence we supply a bijector that maps the real numbers (in a
                # differentiable way) to the interval (0, len(years))
tfb.Chain([tfb.AffineScalar(scale=tf.to_float(len(years))),
tfb.Sigmoid()]),
# Early and late disaster rate: The exponential distribution is
# defined on the positive real numbers
tfb.Softplus(),
tfb.Softplus(),
]))
return states
switchpoint, early_disaster_rate, late_disaster_rate = zip(
make_chain(0, lambda *args: target_log_prob_fn(log_joint_switch, *args)),
make_chain(1, lambda *args: target_log_prob_fn(log_joint_sigmoid, *args)))
switchpoint_, early_disaster_rate_, late_disaster_rate_ = (
[None, None], [None, None], [None, None])
# + [markdown] colab_type="text" id="6QLlqXi1VHLQ"
# Run both models in parallel:
# + colab={} colab_type="code" id="JkDzXzcOq-3k"
init_op = tf.global_variables_initializer()
with tf.Session() as sess:
init_op.run()
[
switchpoint_[0], switchpoint_[1],
early_disaster_rate_[0], early_disaster_rate_[1],
late_disaster_rate_[0], late_disaster_rate_[1],
] = sess.run([
switchpoint[0], switchpoint[1],
early_disaster_rate[0], early_disaster_rate[1],
late_disaster_rate[0], late_disaster_rate[1],
])
# + [markdown] colab_type="text" id="G22O89uhaeKS"
# ## Visualize the result
#
# We visualize the result as histograms of samples of the posterior distribution for the early and late disaster rate, as well as the switchpoint. The histograms are overlaid with a solid line representing the sample median, as well as the 95%ile credible interval bounds as dashed lines.
# + colab={"height": 1596} colab_type="code" executionInfo={"elapsed": 6320, "status": "ok", "timestamp": 1546900700089, "user": {"displayName": "", "photoUrl": "", "userId": ""}, "user_tz": 480} id="ZzSxHMRaXoip" outputId="17af3bdc-95a1-4999-f891-bf466264676d"
def _desc(v):
return '(median: {}; 95%ile CI: $[{}, {}]$)'.format(
*np.round(np.percentile(v, [50, 2.5, 97.5]), 2))
for t, v in [
('Early disaster rate ($e$) posterior samples', early_disaster_rate_),
('Late disaster rate ($l$) posterior samples', late_disaster_rate_),
('Switch point ($s$) posterior samples', years[0] + switchpoint_),
]:
fig, ax = plt.subplots(nrows=1, ncols=2, sharex=True)
for (m, i) in (('Switch', 0), ('Sigmoid', 1)):
a = ax[i]
a.hist(v[i], bins=50)
a.axvline(x=np.percentile(v[i], 50), color='k')
a.axvline(x=np.percentile(v[i], 2.5), color='k', ls='dashed', alpha=.5)
a.axvline(x=np.percentile(v[i], 97.5), color='k', ls='dashed', alpha=.5)
a.set_title(m + ' model ' + _desc(v[i]))
fig.suptitle(t)
plt.show()
|
tensorflow_probability/examples/jupyter_notebooks/Bayesian_Switchpoint_Analysis.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# + [markdown] papermill={"duration": 0.018668, "end_time": "2021-09-11T18:57:27.286602", "exception": false, "start_time": "2021-09-11T18:57:27.267934", "status": "completed"} tags=[]
# # 1. Parameters
# + papermill={"duration": 0.15919, "end_time": "2021-09-11T18:57:27.456231", "exception": false, "start_time": "2021-09-11T18:57:27.297041", "status": "completed"} tags=["parameters"]
# Defaults
## Random seed
random_seed <- 25524
## Directories
simulation_dir <- "simulations/unset"
reference_file <- "simulations/reference/reference.fa.gz"
initial_tree_file <- "input/salmonella.tre"
## Simulation parameters
sub_lambda <- 1e-2
sub_pi_tcag <- c(0.1, 0.2, 0.3, 0.4)
sub_alpha <- 0.2
sub_beta <- sub_alpha/2
sub_mu <- 1
sub_invariant <- 0.3
ins_rate <- 1e-4
ins_max_length <- 60
ins_a <- 1.6
del_rate <- 1e-4
del_max_length <- 60
del_a <- 1.6
## Read simulation information
read_coverage <- 30
read_length <- 250
## Other
ncores <- 48
# + papermill={"duration": 0.066267, "end_time": "2021-09-11T18:57:27.537803", "exception": false, "start_time": "2021-09-11T18:57:27.471536", "status": "completed"} tags=["injected-parameters"]
# Parameters
read_coverage = 30
mincov = 10
simulation_dir = "simulations/alpha-2.0-cov-30"
iterations = 3
sub_alpha = 2.0
# + papermill={"duration": 0.046108, "end_time": "2021-09-11T18:57:27.598819", "exception": false, "start_time": "2021-09-11T18:57:27.552711", "status": "completed"} tags=[]
output_dir <- file.path(simulation_dir, "simulated_data")
output_vcf_prefix <- file.path(output_dir, "haplotypes")
reads_data_initial_prefix <- file.path(output_dir, "reads_initial", "data")
set.seed(random_seed)
print(output_dir)
print(output_vcf_prefix)
# + [markdown] papermill={"duration": 0.009851, "end_time": "2021-09-11T18:57:27.625741", "exception": false, "start_time": "2021-09-11T18:57:27.615890", "status": "completed"} tags=[]
# # 2. Generate simulated data
#
# This simulates *Salmonella* data using a reference genome and a tree.
# + papermill={"duration": 0.108747, "end_time": "2021-09-11T18:57:27.741200", "exception": false, "start_time": "2021-09-11T18:57:27.632453", "status": "completed"} tags=[]
library(jackalope)
# Make sure we've compiled with OpenMP
jackalope:::using_openmp()
# + papermill={"duration": 0.041954, "end_time": "2021-09-11T18:57:27.799058", "exception": false, "start_time": "2021-09-11T18:57:27.757104", "status": "completed"} tags=[]
reference <- read_fasta(reference_file)
reference_len <- sum(reference$sizes())
reference
# + papermill={"duration": 0.090043, "end_time": "2021-09-11T18:57:27.903722", "exception": false, "start_time": "2021-09-11T18:57:27.813679", "status": "completed"} tags=[]
library(ape)
tree <- read.tree(initial_tree_file)
tree <- root(tree, "reference", resolve.root=TRUE)
tree
# + papermill={"duration": 0.218753, "end_time": "2021-09-11T18:57:28.140932", "exception": false, "start_time": "2021-09-11T18:57:27.922179", "status": "completed"} tags=[]
sub <- sub_HKY85(pi_tcag = sub_pi_tcag, mu = sub_mu,
alpha = sub_alpha, beta = sub_beta, gamma_shape=1, gamma_k = 5,
invariant = sub_invariant)
ins <- indels(rate = ins_rate, max_length = ins_max_length, a = ins_a)
del <- indels(rate = del_rate, max_length = del_max_length, a = del_a)
ref_haplotypes <- create_haplotypes(reference, haps_phylo(tree), sub=sub, ins=ins, del=del)
ref_haplotypes
# + [markdown] papermill={"duration": 0.01009, "end_time": "2021-09-11T18:57:28.169287", "exception": false, "start_time": "2021-09-11T18:57:28.159197", "status": "completed"} tags=[]
# # 3. Write simulated data
# + papermill={"duration": 0.153493, "end_time": "2021-09-11T18:57:28.330584", "exception": false, "start_time": "2021-09-11T18:57:28.177091", "status": "completed"} tags=[]
write_vcf(ref_haplotypes, out_prefix=output_vcf_prefix, compress=TRUE)
# + papermill={"duration": 0.426175, "end_time": "2021-09-11T18:57:28.775739", "exception": false, "start_time": "2021-09-11T18:57:28.349564", "status": "completed"} tags=[]
assemblies_prefix = file.path(output_dir, "assemblies", "data")
write_fasta(ref_haplotypes, out_prefix=assemblies_prefix,
compress=TRUE, n_threads=ncores, overwrite=TRUE)
# + papermill={"duration": 317.39663, "end_time": "2021-09-11T19:02:46.183921", "exception": false, "start_time": "2021-09-11T18:57:28.787291", "status": "completed"} tags=[]
n_samples <- length(tree$tip)
n_reads <- round((reference_len * read_coverage * n_samples) / read_length)
print(sprintf("Number of reads for coverage %sX and read length %s over %s samples with respect to reference with length %s: %s",
read_coverage, read_length, n_samples, reference_len, n_reads))
illumina(ref_haplotypes, out_prefix = reads_data_initial_prefix, sep_files=TRUE, n_reads = n_reads,
frag_mean = read_length * 2 + 50, frag_sd = 100,
compress=TRUE, comp_method="bgzip", n_threads=ncores,
paired=TRUE, read_length = read_length)
# + papermill={"duration": 0.066818, "end_time": "2021-09-11T19:02:46.271950", "exception": false, "start_time": "2021-09-11T19:02:46.205132", "status": "completed"} tags=[]
# Remove the simulated reads for the reference genome since I don't want these in the tree
ref1 <- paste(toString(reads_data_initial_prefix), "_reference_R1.fq.gz", sep="")
ref2 <- paste(toString(reads_data_initial_prefix), "_reference_R2.fq.gz", sep="")
if (file.exists(ref1)) {
file.remove(ref1)
print(sprintf("Removing: %s", ref1))
}
if (file.exists(ref2)) {
file.remove(ref2)
print(sprintf("Removing: %s", ref2))
}
# + papermill={"duration": 0.039472, "end_time": "2021-09-11T19:02:46.333913", "exception": false, "start_time": "2021-09-11T19:02:46.294441", "status": "completed"} tags=[]
# Remove the new reference assembly genome since I don't need it
ref1 <- paste(toString(assemblies_prefix), "__reference.fa.gz", sep="")
if (file.exists(ref1)) {
file.remove(ref1)
print(sprintf("Removing: %s", ref1))
}
# + papermill={"duration": 0.011139, "end_time": "2021-09-11T19:02:46.364123", "exception": false, "start_time": "2021-09-11T19:02:46.352984", "status": "completed"} tags=[]
|
evaluations/simulation/1-simulate-reads.simulation-alpha-2.0.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
'''
Exercise 8.3. A string slice can take a third index that specifies the “step size”; that is, the number
of spaces between successive characters. A step size of 2 means every other character; 3 means every
third, etc.
>>> fruit = 'banana'
>>> fruit[0:5:2]
'bnn'
A step size of -1 goes through the word backwards, so the slice [::-1] generates a reversed string.
Use this idiom to write a one-line version of is_palindrome from Exercise 6.3.
'''
# +
def is_palindrome(word):
return word == word[::-1]
print(is_palindrome('bannab'))
print(is_palindrome('banana'))
print(is_palindrome(''))
print(is_palindrome('a'))
# -
|
ch8/ex8_3.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="bxkZAOaRIVbh"
# **Project**
# + [markdown] id="snBZ2rIlIY_o"
# Presented by: <NAME>
# + id="8lS-VC-TIbL5"
import os
import pandas as pd
import numpy as np
import pystan
import seaborn as sns
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow import keras
from tensorflow.keras.preprocessing.sequence import TimeseriesGenerator
# + id="Nv9DRRhLIhG6"
link='https://www.datos.gov.co/api/views/gt2j-8ykr/rows.csv?accessType=DOWNLOAD'
# + colab={"base_uri": "https://localhost:8080/"} id="FJV3nzVdIofT" outputId="ceee12a8-1251-42c9-fd11-5ec476051a83"
coronavirus=pd.read_csv(link)
# + colab={"base_uri": "https://localhost:8080/", "height": 382} id="O380CtNiI9zg" outputId="eaadbecf-53ac-44f1-8a2d-f62d73450289"
coronavirus.head()
# + id="_UTarQp8J6Hn"
coronavirus = coronavirus[coronavirus["Nombre municipio"]=="BOGOTA"]
# + colab={"base_uri": "https://localhost:8080/", "height": 382} id="xieT1x-fKSWM" outputId="06270cb0-267b-463d-980e-08e1ba718754"
coronavirus.head()
# + [markdown] id="pFvYk-A1KcPG"
# **Descriptive analysis**
# + colab={"base_uri": "https://localhost:8080/", "height": 300} id="zImZxvh9Kf2M" outputId="d7c1afc5-6208-41c4-a172-a3d417aa33ca"
coronavirus.describe()
# + colab={"base_uri": "https://localhost:8080/"} id="d9U7DDLnKlt9" outputId="3576dceb-3557-48c1-c140-6ec4adcac3f2"
coronavirus.mean()
# + id="u1ve0gksK5Ql"
coronavirus=coronavirus['fecha reporte web'].value_counts()
# + id="wW0sNenwLHnL"
coronavirus=pd.DataFrame({'fecha reporte web':coronavirus.index, 'casos':coronavirus.values})
# + colab={"base_uri": "https://localhost:8080/", "height": 424} id="d-m5DcoWLQnL" outputId="822edb93-3488-4496-be3b-093159e2d1bd"
coronavirus
# + id="h0QnCMASLTtz"
df=coronavirus
# + colab={"base_uri": "https://localhost:8080/"} id="rPKjuek6LYmD" outputId="652e3593-b79e-4a03-b1b7-672685f51274"
df.info()
# + id="-8ihOXtUNfuD"
df['fecha reporte web'] = pd.to_datetime(df['fecha reporte web'])
df=df.sort_values(by='fecha reporte web')
# + colab={"base_uri": "https://localhost:8080/", "height": 296} id="Yv25xY0cLfaz" outputId="7e14f7cf-0fdb-4814-fc7f-0fd6112ba843"
sns.lineplot(data=df,y='casos',x='fecha reporte web')
# + id="VbE1ZQVCL0kz"
df_1= np.array(df['casos'].values)
generator = TimeseriesGenerator(df_1,df_1,length=10,batch_size=12)
# + id="JLqqXD3kL6C6"
model = tf.keras.models.Sequential()
model.add(tf.keras.Input(shape=(10,)))
model.add(tf.keras.layers.Dense(8,activation='relu'))
model.add(tf.keras.layers.Dense(6,activation='linear'))
# + colab={"base_uri": "https://localhost:8080/", "height": 312} id="NDj4Z8ssL79y" outputId="a42e1ae1-8c44-4130-d6cd-e727598271ce"
tf.keras.utils.plot_model(model,show_shapes=True)
# + id="k5X_2SW5L9up"
model.compile(loss='mse',optimizer='adam')
# + colab={"base_uri": "https://localhost:8080/"} id="NwsMt8JwMALp" outputId="9f826598-f52f-4de7-af5a-664d3ba3d86d"
model.fit_generator(generator,epochs=200)
# + colab={"base_uri": "https://localhost:8080/"} id="qJuAMppiMBvL" outputId="ec766322-8ee0-4a41-d36f-3495117cb7e6"
print(model.summary())
# + colab={"base_uri": "https://localhost:8080/"} id="-dUvTp43MNWp" outputId="09974989-0707-4dec-d202-e950c8447010"
y_pred = model.predict_generator(generator)
# + colab={"base_uri": "https://localhost:8080/", "height": 804} id="lUblwplbMP9Z" outputId="f1f06532-a1a6-46d4-cbb0-35db3b9814a6"
plt.figure(figsize=(18,12))
plt.plot(df_1[10:])
plt.plot(y_pred)
# + id="w48BKpqAMSFL"
|
Proyecto_Jordy_Mesa.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Execute Shell Commands In Python
#
#
# ## Execute Shell Commands with '!'
# To directly execute a shell command in Jupyter Notebook, put a "!" before the command
# !echo This is a shell command
# another example
# !ls -lh
# ## Use the magic command to turn the cell into shell cell
# + language="bash"
#
# echo All commands in this cell are considered shell commands
# ls -hl
# pwd
# -
# ## Subprocess package
# The subprocess package allows you to execute shell commands within Python. It provides two APIs to execute commands: `subprocess.run` and `subprocess.Popen`. Here I only introduce `subprocess.run`; `subprocess.Popen` will be introduced later in an advanced section, since it is a lower-level API.
# import the run function directly and the PIPE
from subprocess import run, PIPE
# The sleep command will sleep X seconds before returning.
# We will use it to mimic other commands;
# you can imagine replacing the sleep command with any other command, like salmon quant
# !sleep 1
# ### Run a command just like in the shell
return_object = run('sleep 1', shell=True) # I executed a command here
return_object
# Let's take a look at what is returned by the run function
# it is an instance of the subprocess.CompletedProcess class
type(return_object)
# This subprocess.CompletedProcess class has several methods and attributes you need to know
for i in dir(return_object):
if i.startswith('__'):
        # methods starting with __ are skipped; no need to understand them here
# we will talk about this in later advanced sections.
continue
print(i)
# Let's write a function to print them out
def print_return_obj(return_obj):
print(f"""The args was: {return_obj.args}
The returncode was: {return_obj.returncode}
The stderr information was: {return_obj.stderr}
The stdout information was: {return_obj.stdout}""")
print_return_obj(return_object)
# ## Gather the stdout and stderr
# +
# now let's use a compound command that prints something to stderr and stdout
# There are three commands:
# 1. sleep 1 sec so you feel it running
# 2. echo something to stdout; the ">&1" redirects the information to stdout
# 3. echo something to stderr; the ">&2" redirects the information to stderr
# !sleep 1; echo "some stdout information" >&1; echo "some stderr information" >&2
# -
# ### By default, stdout and stderr are not captured
three_command = 'sleep 1; echo "some stdout information" >&1; echo "some stderr information" >&2'
# If you execute the command using run() like this, you will not see any stdout or stderr
return_obj = run(three_command, shell=True)
print_return_obj(return_obj)
# ### Capture the stderr or stdout with PIPE
return_obj = run(three_command, shell=True, stderr=PIPE, stdout=PIPE)
print_return_obj(return_obj)
# Now you can see the information, but stdout and stderr are bytes, not strings
type(return_obj.stderr)
# To get strings, provide an encoding parameter; remember I explained bytes vs. strings in the [File I/O page](https://hq-1.gitbook.io/essential-python-for-genome-science/data-cleaning/file-i-o).
return_obj = run(three_command, shell=True,
stderr=PIPE, stdout=PIPE, encoding='utf8')
print_return_obj(return_obj)
type(return_obj.stderr)
# ## Check return code
#
# The definition of "success" based on the return code is simple: if the return code is 0, the command finished successfully; otherwise, it failed.
#
# By default, subprocess.run will not raise any error if the command has a non-zero return code; however, you can change this by setting check=True
# +
# In this command, I deliberately return a non-zero return code
non_zero_return_code_command = 'sleep 1; exit 1'
return_obj = run(non_zero_return_code_command, shell=True,
stderr=PIPE, stdout=PIPE, encoding='utf8')
print_return_obj(return_obj)
# there is no error, because check=False by default. But the returncode was 1
# +
# If we add check=True
return_obj = run(non_zero_return_code_command, shell=True, check=True,
stderr=PIPE, stdout=PIPE, encoding='utf8')
print_return_obj(return_obj)
# we got a CalledProcessError, and you know the command failed.
# -
# ### Use try ... except to catch the error and do something about it
#
# When you get an error, it's not the end of the world. You can catch the error and deal with it: for example, rerun the command automatically, or delete temporary files to avoid leaving incomplete results
# +
# In order to catch this special error defined in the subprocess package, you need to import it first
from subprocess import CalledProcessError
try:
return_obj = run(non_zero_return_code_command, shell=True, check=True,
stderr=PIPE, stdout=PIPE, encoding='utf8')
print_return_obj(return_obj)
except CalledProcessError as error:
    # the error also contains information about the process
    print('The command has returncode:', error.returncode)
    print('Now we can do something about this error, like deleting temporary files or retrying it.')
    print("Once you've done the cleanup, you can still raise the error to alert users")
raise error
# -
# ## Provide command list when shell=False
#
# All the above commands were provided as strings, with shell=True. This is actually not the default way to use run(). The default is shell=False, in which case you have to provide the command as a list, not a string
# +
simple_command = 'sleep 1'
# we set shell=False, which is the default
return_obj = run(simple_command, shell=False, check=True,
stderr=PIPE, stdout=PIPE, encoding='utf8')
print_return_obj(return_obj)
# This error is not really a "FileNotFoundError"; it occurs because the command was not parsed correctly.
# When shell=False, run() expects the command to be provided as a list
# +
simple_command_list = ['sleep', '1']
return_obj = run(simple_command_list, shell=False, check=True,
stderr=PIPE, stdout=PIPE, encoding='utf8')
print_return_obj(return_obj)
# Now it's OK
# -
# ## OPTIONAL - Why is shell=False the default?
# The reason shell=False is the default is due to some [security considerations](https://docs.python.org/3.8/library/subprocess.html#security-considerations). This is something package developers need to pay attention to; however, if you only execute code that you generated yourself, it's OK to use shell=True. Following the official Python documentation may also be a good choice; it just requires you to provide your command as a list.
# Another drawback of shell=False is that you cannot use UNIX pipes or redirects like in the examples above. In order to use them, you need to use the lower-level API subprocess.Popen, which will be explained in a later section.
#
# Or, you need to set shell=True.
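As a preview of that lower-level API, here is a minimal sketch of building a UNIX pipe with `subprocess.Popen` while keeping `shell=False`; the `printf` and `sort` commands here are just stand-ins for real tools:

```python
from subprocess import Popen, PIPE

# Emulate the shell pipeline `printf 'c\na\nb\n' | sort` without shell=True.
# Each Popen starts one process; p1's stdout is wired into p2's stdin.
p1 = Popen(['printf', 'c\\na\\nb\\n'], stdout=PIPE)
p2 = Popen(['sort'], stdin=p1.stdout, stdout=PIPE, encoding='utf8')
p1.stdout.close()  # let p1 receive SIGPIPE if p2 exits early
out, _ = p2.communicate()  # wait for p2 and collect its stdout
print(out)
```

Because the pipe is built in Python rather than by a shell, each command is still passed as a list.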
# +
import shlex  # shlex.split is a clever splitter for shell commands
command_with_redirect = 'echo "some stderr information" >&2'
return_obj = run(shlex.split(command_with_redirect), shell=False, check=True,
                 stderr=PIPE, stdout=PIPE, encoding='utf8')
print_return_obj(return_obj)
# the output should have gone to stderr, but it didn't; this is not the correct result,
# because run() does not support redirects like this when shell=False
# +
# if shell=True
return_obj = run(command_with_redirect, shell=True, check=True,
                 stderr=PIPE, stdout=PIPE, encoding='utf8')
print_return_obj(return_obj)
# see the difference? The information is printed to stderr, not stdout
# -
# ## Take home message
#
# - subprocess.run() is the most common API for running shell commands in Python.
# - set stderr=PIPE and stdout=PIPE to capture the information printed to these two system file handles
# - set encoding='utf8' to make sure you get strings, not bytes, from stderr and stdout
# - Returncode == 0 means the job succeeded; anything else means failure. If check=True, a non-zero return code triggers subprocess.CalledProcessError
# - Try ... except ... allows you to catch an error and deal with it
# - By default, shell=False and you need to provide the command as a list; redirects and pipes will not work.
# - When shell=True, you can provide command as a string and any command works just like in shell
|
analysis/python_basic/execute_shell_commands_in_python.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Zip Folder
#
# Use this notebook to create a compressed archive of any folder in the project (or the entire project). The output can either be a 'zip' or a 'tar.gz' archive. The latter preserves file permissions, which is necessary for some visualizations to work when the archive is extracted. If you are only compressing data, you can use the 'zip' format.
# ## Setup
# +
# Python imports
import glob
import os
import shutil
import tarfile
from IPython.display import display, HTML
# Archive function
def make_archive(source, destination):
    if destination.endswith('tar.gz'):
        # Archive the source directory itself (using the function's parameters,
        # not globals), preserving file permissions.
        with tarfile.open(destination, 'w:gz') as tar:
            tar.add(source, arcname=os.path.basename(source.strip(os.sep)))
    else:
        base = os.path.basename(destination)
        name = base.split('.')[0]
        format = base.split('.')[1]
        archive_from = os.path.dirname(source)
        archive_to = os.path.basename(source.strip(os.sep))
        shutil.make_archive(name, format, archive_from, archive_to)
        shutil.move('%s.%s' % (name, format), destination)
# -
# ## Configuration
#
# The `source_directory` is the file path to the directory you wish to compress.
# The `output_filepath` is the path to the compressed output file, including the filename (e.g. 'myfolder.zip').
#
# By default, your output file will be a 'zip' archive (it should end with `.zip`). If you use the file extension `tar.gz`, the compressed output file will be saved in the 'tar.gz' format instead.
source_directory = '' # Path to directory to compress
output_filepath = '' # Path to archive file to save, including the filename
# ## Zip the Folder
#
# Run the cell below to create the compressed archive file.
try:
make_archive(source_directory, output_filepath)
display(HTML('<p style="color: green;">The zip archive was saved to ' + output_filepath + '.</p>'))
except IOError:
display(HTML('<p style="color: red;">An unknown error occurred. The zip file could not be saved.</p>'))
|
src/templates/v0.1.9/modules/utilities/zip_folder.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
from statsmodels.stats.proportion import proportions_ztest
# +
info = pd.read_csv('../data/cleaned/info.csv')
info = info.set_index('dbn')
directory = pd.read_csv('../data/2020_DOE_High_School_Directory.csv')
metrics = pd.read_csv('../data/metrics_2019.csv')
metrics = metrics.set_index('dbn')
school_names = pd.read_csv('../data/cleaned/school_names.csv')
school_names = school_names.set_index('dbn')
# +
folder = '../data/cleaned/'
applicants = pd.read_csv(folder + 'applicants.csv')
applicants = applicants.set_index('dbn')
avg_applicants = pd.read_csv(folder + 'avg_applicants.csv')
avg_applicants = avg_applicants.set_index('dbn')
min_applicants = pd.read_csv(folder + 'min_applicants.csv')
min_applicants = min_applicants.set_index('dbn')
max_applicants = pd.read_csv(folder + 'max_applicants.csv')
max_applicants = max_applicants.set_index('dbn')
offers = pd.read_csv(folder + 'offers.csv')
offers = offers.set_index('dbn')
avg_offers = pd.read_csv(folder + 'avg_offers.csv')
avg_offers = avg_offers.set_index('dbn')
min_offers = pd.read_csv(folder + 'min_offers.csv')
min_offers = min_offers.set_index('dbn')
max_offers = pd.read_csv(folder + 'max_offers.csv')
max_offers = max_offers.set_index('dbn')
# -
cols = ['Total', 'Asian','Black','Hispanic','White']
# ### Offer Rates
# #### Individual Schools
avg_offers.div(avg_applicants)[cols]
# #### All Schools
avg_offers.sum().div(avg_applicants.sum())[cols]
# #### Top Ranking Schools
top = metrics[metrics.eq('Far Above Average').sum(axis=1).gt(4)].index
print(len(top))
top = [dbn for dbn in top if dbn in info.index]
print(len(top))
avg_offers.loc[top].sum().div(avg_applicants.loc[top].sum())[cols]
# +
df_info = pd.DataFrame([['All Screened Schools', len(offers), applicants['Total'].sum(), offers['Total'].sum()],
['"Top-Ranked" Screened Schools', len(offers.loc[top]), applicants.loc[top]['Total'].sum(), offers.loc[top]['Total'].sum()]], columns=['index','No. of Schools', 'No. of Applicants', 'No. of Offers'])
df_info = df_info.set_index('index')
df = pd.DataFrame()
df['All Screened Schools'] = avg_offers.sum().div(avg_applicants.sum()).mul(100).round(1)[cols]
df['"Top-Ranked" Screened Schools'] = avg_offers.loc[top].sum().div(avg_applicants.loc[top].sum()).mul(100).round(1)[cols]
df = df.T
df.columns = [col.replace('Hispanic', 'Latino').replace('Total','Overall') + " Offer Rate (%)" for col in df.columns]
display(df)
df_info.join(df, how='outer').to_csv('../output/all-vs-top-ranked-rates.csv')
df_info.join(df, how='outer')
# -
for col in cols:
count = avg_offers.loc[top].sum()[col]
nobs = avg_applicants.loc[top].sum()[col]
prop_var = min_offers.loc[top].sum()[col] / max_applicants.loc[top].sum()[col]
value = 0.05
stats, pval = proportions_ztest(count, nobs, value, prop_var = prop_var)
print(col, '{0:0.3f}'.format(pval))
prop_var = max_offers.loc[top].sum()[col] / min_applicants.loc[top].sum()[col]
stats, pval = proportions_ztest(count, nobs, value, prop_var = prop_var)
print(col, '{0:0.3f}'.format(pval))
print(round(avg_offers.loc[top].sum()[col] / avg_applicants.loc[top].sum()[col] * 100, 1),
round(min_offers.loc[top].sum()[col] / max_applicants.loc[top].sum()[col] * 100, 1),
round(max_offers.loc[top].sum()[col] / min_applicants.loc[top].sum()[col] * 100, 1))
# ### Demographic Breakdowns of Applicants and Offers
# #### Individual Schools
# +
redacted_offers = offers.eq('s') | offers.eq('s^')
redacted_applicants = applicants.eq('s') | applicants.eq('s^')
df = pd.DataFrame()
df['School Name'] = school_names['school_name']
df['# Applicants'] = avg_applicants['Total']
df[['% Asian Applicants', '% Black Applicants', '% Latino Applicants', '% White Applicants']] = avg_applicants[['Asian', 'Black', 'Hispanic', 'White']].div(avg_applicants['Total'],axis='index').mul(100).round(1)
df[['% Asian Applicants - Estimate Off By (+/-)', '% Black Applicants - Estimate Off By (+/-)', '% Latino Applicants - Estimate Off By (+/-)', '% White Applicants - Estimate Off By (+/-)']] = max_applicants.sub(avg_applicants).div(avg_applicants['Total'], axis='index').mul(100).round(1)[['Asian', 'Black','Hispanic','White']]
df[['# Offers']] = avg_offers['Total']
df[['% Asian Offers', '% Black Offers', '% Latino Offers', '% White Offers']] = avg_offers[['Asian', 'Black', 'Hispanic', 'White']].div(avg_offers['Total'],axis='index').mul(100).round(1)
df[['% Asian Offers - Estimate Off By (+/-)', '% Black Offers - Estimate Off By (+/-)', '% Latino Offers - Estimate Off By (+/-)', '% White Offers - Estimate Off By (+/-)']] = max_offers.sub(avg_offers).div(avg_offers['Total'], axis='index').mul(100).round(1)[['Asian', 'Black','Hispanic','White']]
column_order = ['School Name']
column_order.extend(sorted([col for col in df if 'Applicants' in col]))
column_order.extend(sorted([col for col in df if 'Offers' in col]))
df[column_order].to_csv('../output/pct-applicants-offers-stats.csv')
df[column_order]
# -
# #### Combined: All Schools and Top Ranking Schools
# +
df = pd.DataFrame()
top = metrics[metrics.eq('Far Above Average').sum(axis=1).gt(4)].index
top = [dbn for dbn in top if dbn in info.index]
groups = [('Top-Ranked Screened High Schools', top),
('All Screened High Schools', info.index)]
rows = []
for (title, dbns) in groups:
row = []
a = avg_applicants.loc[dbns].sum()[['Total', 'Asian', 'Black', 'Hispanic', 'White']]
o = avg_offers.loc[dbns].sum()[['Total', 'Asian', 'Black', 'Hispanic', 'White']]
arr = max_applicants.loc[dbns].sum()[['Total', 'Asian', 'Black', 'Hispanic', 'White']]
#print(arr - a)
orr = max_offers.loc[dbns].sum()[['Total', 'Asian', 'Black', 'Hispanic', 'White']]
print((arr - a).div(a['Total']).mul(100).round(1))
print()
row.append(title)
row.append(len(dbns))
row.append(a['Total'])
row.extend(list(a[['Asian', 'Black', 'Hispanic', 'White']].div(a['Total']).mul(100).round(1)))
row.extend(list((arr - a).div(a['Total']).mul(100).round(1)[['Asian', 'Black', 'Hispanic', 'White']]))
row.append(o['Total'])
row.extend(list(o[['Asian', 'Black', 'Hispanic', 'White']].div(o['Total']).mul(100).round(1)))
row.extend(list((orr - o).div(o['Total']).mul(100).round(1)[['Asian', 'Black', 'Hispanic', 'White']]))
rows.append(row)
columns = [
'Category',
'# Schools',
'# Applicants',
'% Asian Applicants',
'% Black Applicants',
'% Latino Applicants',
'% White Applicants',
'% Asian Applicants - Estimate Off By (+/-)',
'% Black Applicants - Estimate Off By (+/-)',
'% Latino Applicants - Estimate Off By (+/-)',
'% White Applicants - Estimate Off By (+/-)',
'# Offers',
'% Asian Offers',
'% Black Offers',
'% Latino Offers',
'% White Offers',
'% Asian Offers - Estimate Off By (+/-)',
'% Black Offers - Estimate Off By (+/-)',
'% Latino Offers - Estimate Off By (+/-)',
'% White Offers - Estimate Off By (+/-)'
]
col_order = ['Category', '# Schools']
col_order.extend(sorted([col for col in columns if 'Applicants' in col]))
col_order.extend(sorted([col for col in columns if 'Offers' in col]))
df = pd.DataFrame(rows, columns=columns)[col_order]
df.to_csv('../output/all-vs-top-ranked-shares.csv', index=False)
df
# -
pd.DataFrame(rows, columns=columns)[col_order].set_index('Category')
# "Top-Ranked" Screened Schools Demographics
d = pd.DataFrame()
a = avg_applicants.loc[top].sum()
a = a.rename({'Hispanic':'Latino'})
o = avg_offers.loc[top].sum()
o = o.rename({'Hispanic':'Latino'})
d['Applicants (%)'] = a.div(a['Total']).mul(100).round(1)[['Asian', 'Black', 'Latino', 'White']]
d['Offers (%)'] = o.div(o['Total']).mul(100).round(1)[['Asian', 'Black', 'Latino', 'White']]
d['Difference (%)'] = d['Offers (%)'].sub(d['Applicants (%)']).round(1)
d.to_csv('../output/top-ranked-demo-differences.csv')
display(d)
|
notebooks/2-share-and-offer-calculations.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="3pkUd_9IZCFO"
# # TFRecord and tf.Example
#
# **Learning Objectives**
#
# 1. Understand the TFRecord format for storing data
# 2. Understand the tf.Example message type
# 3. Read and Write a TFRecord file
#
#
# ## Introduction
#
# In this notebook, you create, parse, and use the `tf.Example` message, and then serialize, write, and read `tf.Example` messages to and from `.tfrecord` files. To read data efficiently it can be helpful to serialize your data and store it in a set of files (100-200MB each) that can each be read linearly. This is especially true if the data is being streamed over a network. This can also be useful for caching any data-preprocessing.
#
#
# Each learning objective will correspond to a __#TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/tfrecord-tf.example.ipynb).
#
#
#
#
# + [markdown] colab_type="text" id="Ac83J0QxjhFt"
# ### The TFRecord format
#
# The TFRecord format is a simple format for storing a sequence of binary records. [Protocol buffers](https://developers.google.com/protocol-buffers/) are a cross-platform, cross-language library for efficient serialization of structured data. Protocol messages are defined by `.proto` files; these are often the easiest way to understand a message type.
#
# The `tf.Example` message (or protobuf) is a flexible message type that represents a `{"string": value}` mapping. It is designed for use with TensorFlow and is used throughout the higher-level APIs such as [TFX](https://www.tensorflow.org/tfx/).
# Note: While useful, these structures are optional. There is no need to convert existing code to use TFRecords, unless you are using [`tf.data`](https://www.tensorflow.org/guide/datasets) and reading data is still the bottleneck to training. See [Data Input Pipeline Performance](https://www.tensorflow.org/datasets/performances) for dataset performance tips.
# + [markdown] colab_type="text" id="WkRreBf1eDVc"
# ## Load necessary libraries
# We will start by importing the necessary libraries for this lab.
# -
# !sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# + colab={} colab_type="code" id="Ja7sezsmnXph"
# #!pip install --upgrade tensorflow==2.5
import tensorflow as tf
import numpy as np
import IPython.display as display
print("TensorFlow version: ",tf.version.VERSION)
# -
# Please ignore any incompatibility warnings and errors.
#
# + [markdown] colab_type="text" id="e5Kq88ccUWQV"
# ## `tf.Example`
# + [markdown] colab_type="text" id="VrdQHgvNijTi"
# ### Data types for `tf.Example`
# + [markdown] colab_type="text" id="lZw57Qrn4CTE"
# Fundamentally, a `tf.Example` is a `{"string": tf.train.Feature}` mapping.
#
# The `tf.train.Feature` message type can accept one of the following three types (See the [`.proto` file](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/example/feature.proto) for reference). Most other generic types can be coerced into one of these:
#
# 1. `tf.train.BytesList` (the following types can be coerced)
#
# - `string`
# - `byte`
#
# 1. `tf.train.FloatList` (the following types can be coerced)
#
# - `float` (`float32`)
# - `double` (`float64`)
#
# 1. `tf.train.Int64List` (the following types can be coerced)
#
# - `bool`
# - `enum`
# - `int32`
# - `uint32`
# - `int64`
# - `uint64`
# + [markdown] colab_type="text" id="_e3g9ExathXP"
# **Lab Task #1a:** In order to convert a standard TensorFlow type to a `tf.Example`-compatible `tf.train.Feature`, you can use the shortcut functions below. Note that each function takes a scalar input value and returns a `tf.train.Feature` containing one of the three `list` types above. Complete the `TODOs` below using these types.
# + colab={} colab_type="code" id="mbsPOUpVtYxA"
# TODO 1a
# The following functions can be used to convert a value to a type compatible
# with tf.Example.
def _bytes_feature(value):
"""Returns a bytes_list from a string / byte."""
if isinstance(value, type(tf.constant(0))):
value = value.numpy() # BytesList won't unpack a string from an EagerTensor.
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
def _float_feature(value):
"""Returns a float_list from a float / double."""
return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))
def _int64_feature(value):
"""Returns an int64_list from a bool / enum / int / uint."""
return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
# + [markdown] colab_type="text" id="Wst0v9O8hgzy"
# Note: To stay simple, this example only uses scalar inputs. The simplest way to handle non-scalar features is to use `tf.io.serialize_tensor` to convert tensors to binary-strings. Strings are scalars in TensorFlow. Use `tf.io.parse_tensor` to convert the binary-string back to a tensor.
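# The same round-trip idea can be illustrated with NumPy (for illustration only — in TensorFlow itself you would use `tf.io.serialize_tensor` and `tf.io.parse_tensor`): the whole array is flattened to a single scalar byte-string, then reconstructed from it.

```python
import numpy as np

# A minimal sketch of tensor <-> binary-string round-tripping, using NumPy
# as a stand-in for tf.io.serialize_tensor / tf.io.parse_tensor.
arr = np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32)
blob = arr.tobytes()  # the whole tensor becomes one scalar byte-string
restored = np.frombuffer(blob, dtype=np.float32).reshape(arr.shape)
```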
# + [markdown] colab_type="text" id="vsMbkkC8xxtB"
# Below are some examples of how these functions work. Note the varying input types and the standardized output types. If the input type for a function does not match one of the coercible types stated above, the function will raise an exception (e.g. `_int64_feature(1.0)` will error out, since `1.0` is a float, so should be used with the `_float_feature` function instead):
# + colab={} colab_type="code" id="hZzyLGr0u73y"
print(_bytes_feature(b'test_string'))
print(_bytes_feature(u'test_bytes'.encode('utf-8')))
print(_float_feature(np.exp(1)))
print(_int64_feature(True))
print(_int64_feature(1))
# + [markdown] colab_type="text" id="nj1qpfQU5qmi"
# **Lab Task #1b:** All proto messages can be serialized to a binary-string using the `.SerializeToString` method. Use this method to complete the below `TODO`:
# + colab={} colab_type="code" id="5afZkORT5pjm"
feature = _float_feature(np.exp(1))
# `SerializeToString()` serializes the message and returns it as a string
feature.SerializeToString()
# + [markdown] colab_type="text" id="laKnw9F3hL-W"
# ### Creating a `tf.Example` message
# + [markdown] colab_type="text" id="b_MEnhxchQPC"
# Suppose you want to create a `tf.Example` message from existing data. In practice, the dataset may come from anywhere, but the procedure of creating the `tf.Example` message from a single observation will be the same:
#
# 1. Within each observation, each value needs to be converted to a `tf.train.Feature` containing one of the 3 compatible types, using one of the functions above.
#
# 1. You create a map (dictionary) from the feature name string to the encoded feature value produced in #1.
#
# 1. The map produced in step 2 is converted to a [`Features` message](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/example/feature.proto#L85).
# + [markdown] colab_type="text" id="4EgFQ2uHtchc"
# In this notebook, you will create a dataset using NumPy.
#
# This dataset will have 4 features:
#
# * a boolean feature, `False` or `True` with equal probability
# * an integer feature uniformly randomly chosen from `[0, 4]`
# * a string feature generated from a string table by using the integer feature as an index
# * a float feature from a standard normal distribution
#
# Consider a sample consisting of 10,000 independently and identically distributed observations from each of the above distributions:
# + colab={} colab_type="code" id="CnrguFAy3YQv"
# The number of observations in the dataset.
n_observations = int(1e4)
# Boolean feature, encoded as False or True.
feature0 = np.random.choice([False, True], n_observations)
# Integer feature, random from 0 to 4.
feature1 = np.random.randint(0, 5, n_observations)
# String feature
strings = np.array([b'cat', b'dog', b'chicken', b'horse', b'goat'])
feature2 = strings[feature1]
# Float feature, from a standard normal distribution
feature3 = np.random.randn(n_observations)
# + [markdown] colab_type="text" id="aGrscehJr7Jd"
# Each of these features can be coerced into a `tf.Example`-compatible type using one of `_bytes_feature`, `_float_feature`, `_int64_feature`. You can then create a `tf.Example` message from these encoded features:
# + colab={} colab_type="code" id="RTCS49Ij_kUw"
def serialize_example(feature0, feature1, feature2, feature3):
"""
Creates a tf.Example message ready to be written to a file.
"""
# Create a dictionary mapping the feature name to the tf.Example-compatible
# data type.
feature = {
'feature0': _int64_feature(feature0),
'feature1': _int64_feature(feature1),
'feature2': _bytes_feature(feature2),
'feature3': _float_feature(feature3),
}
# Create a Features message using tf.train.Example.
example_proto = tf.train.Example(features=tf.train.Features(feature=feature))
return example_proto.SerializeToString()
# + [markdown] colab_type="text" id="XftzX9CN_uGT"
# For example, suppose you have a single observation from the dataset, `[False, 4, bytes('goat'), 0.9876]`. You can create and print the `tf.Example` message for this observation using `serialize_example()`. Each single observation will be written as a `Features` message as per the above. Note that the `tf.Example` [message](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/example/example.proto#L88) is just a wrapper around the `Features` message:
# + colab={} colab_type="code" id="N8BtSx2RjYcb"
# This is an example observation from the dataset.
example_observation = []
serialized_example = serialize_example(False, 4, b'goat', 0.9876)
serialized_example
# + [markdown] colab_type="text" id="_pbGATlG6u-4"
# **Lab Task #1c:** To decode the message use the `tf.train.Example.FromString` method and complete the below TODO
# + colab={} colab_type="code" id="dGim-mEm6vit"
# TODO 1c
example_proto = tf.train.Example.FromString(serialized_example)
example_proto
# + [markdown] colab_type="text" id="o6qxofy89obI"
# ## TFRecords format details
#
# A TFRecord file contains a sequence of records. The file can only be read sequentially.
#
# Each record contains a byte-string, for the data-payload, plus the data-length, and CRC32C (32-bit CRC using the Castagnoli polynomial) hashes for integrity checking.
#
# Each record is stored in the following format:
#
# uint64 length
# uint32 masked_crc32_of_length
# byte data[length]
# uint32 masked_crc32_of_data
#
# The records are concatenated together to produce the file. CRCs are
# [described here](https://en.wikipedia.org/wiki/Cyclic_redundancy_check), and
# the mask of a CRC is:
#
# masked_crc = ((crc >> 15) | (crc << 17)) + 0xa282ead8ul
#
# Note: There is no requirement to use `tf.Example` in TFRecord files. `tf.Example` is just a method of serializing dictionaries to byte-strings. Any byte-string can be stored: lines of text, encoded image data, or serialized tensors (using `tf.io.serialize_tensor` to write, and
# `tf.io.parse_tensor` when loading). See the `tf.io` module for more options.
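# As a concrete sketch of the mask step above — the CRC value itself would come from a CRC-32C (Castagnoli) implementation, which is not in the Python standard library; only the masking formula is shown here:

```python
def masked_crc32(crc: int) -> int:
    """Apply the TFRecord CRC mask: rotate the 32-bit CRC right by 15 bits,
    then add the constant 0xA282EAD8 (all arithmetic modulo 2**32)."""
    rotated = ((crc >> 15) | (crc << 17)) & 0xFFFFFFFF
    return (rotated + 0xA282EAD8) & 0xFFFFFFFF
```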
# + [markdown] colab_type="text" id="y-Hjmee-fbLH"
# ## TFRecord files using `tf.data`
# + [markdown] colab_type="text" id="GmehkCCT81Ez"
# The `tf.data` module also provides tools for reading and writing data in TensorFlow.
# + [markdown] colab_type="text" id="1FISEuz8ubu3"
# ### Writing a TFRecord file
#
# The easiest way to get the data into a dataset is to use the `from_tensor_slices` method.
#
# Applied to an array, it returns a dataset of scalars:
# + colab={} colab_type="code" id="mXeaukvwu5_-"
tf.data.Dataset.from_tensor_slices(feature1)
# + [markdown] colab_type="text" id="f-q0VKyZvcad"
# Applied to a tuple of arrays, it returns a dataset of tuples:
# + colab={} colab_type="code" id="H5sWyu1kxnvg"
features_dataset = tf.data.Dataset.from_tensor_slices((feature0, feature1, feature2, feature3))
features_dataset
# + colab={} colab_type="code" id="m1C-t71Nywze"
# Use `take(1)` to only pull one example from the dataset.
for f0,f1,f2,f3 in features_dataset.take(1):
print(f0)
print(f1)
print(f2)
print(f3)
# + [markdown] colab_type="text" id="mhIe63awyZYd"
# Use the `tf.data.Dataset.map` method to apply a function to each element of a `Dataset`.
#
# The mapped function must operate in TensorFlow graph mode—it must operate on and return `tf.Tensors`. A non-tensor function, like `serialize_example`, can be wrapped with `tf.py_function` to make it compatible.
#
# **Lab Task 2a:** Using `tf.py_function` requires you to specify the shape and type information that is otherwise unavailable:
# + colab={} colab_type="code" id="apB5KYrJzjPI"
# TODO 2a
def tf_serialize_example(f0,f1,f2,f3):
tf_string = tf.py_function(
serialize_example,
(f0,f1,f2,f3), # pass these args to the above function.
tf.string) # the return type is `tf.string`.
return tf.reshape(tf_string, ()) # The result is a scalar
# + colab={} colab_type="code" id="lHFjW4u4Npz9"
tf_serialize_example(f0,f1,f2,f3)
# + [markdown] colab_type="text" id="CrFZ9avE3HUF"
# **Lab Task 2b:** Apply this function to each element in the features_dataset using the map function and complete below `TODO`:
# + colab={} colab_type="code" id="VDeqYVbW3ww9"
# TODO 2b
serialized_features_dataset = features_dataset.map(tf_serialize_example)
serialized_features_dataset
# + colab={} colab_type="code" id="DlDfuh46bRf6"
def generator():
for features in features_dataset:
yield serialize_example(*features)
# + colab={} colab_type="code" id="iv9oXKrcbhvX"
serialized_features_dataset = tf.data.Dataset.from_generator(
generator, output_types=tf.string, output_shapes=())
# + colab={} colab_type="code" id="Dqz8C4D5cIj9"
serialized_features_dataset
# + [markdown] colab_type="text" id="p6lw5VYpjZZC"
# And write them to a TFRecord file:
# + colab={} colab_type="code" id="vP1VgTO44UIE"
filename = 'test.tfrecord'
writer = tf.data.experimental.TFRecordWriter(filename)
writer.write(serialized_features_dataset)
# + [markdown] colab_type="text" id="6aV0GQhV8tmp"
# ### Reading a TFRecord file
# + [markdown] colab_type="text" id="o3J5D4gcSy8N"
# You can also read the TFRecord file using the `tf.data.TFRecordDataset` class.
#
# More information on consuming TFRecord files using `tf.data` can be found [here](https://www.tensorflow.org/guide/data#consuming_tfrecord_data).
#
# **Lab Task 2c:** Complete the below TODO by using `TFRecordDataset`s which is useful for standardizing input data and optimizing performance.
# + colab={} colab_type="code" id="6OjX6UZl-bHC"
# TODO 2c
filenames = [filename]
raw_dataset = tf.data.TFRecordDataset(filenames)
raw_dataset
# + [markdown] colab_type="text" id="6_EQ9i2E_-Fz"
# At this point the dataset contains serialized `tf.train.Example` messages. When iterated over it returns these as scalar string tensors.
#
# Use the `.take` method to only show the first 10 records.
#
# Note: iterating over a `tf.data.Dataset` only works with eager execution enabled.
# + colab={} colab_type="code" id="hxVXpLz_AJlm"
for raw_record in raw_dataset.take(10):
print(repr(raw_record))
# + [markdown] colab_type="text" id="W-6oNzM4luFQ"
# These tensors can be parsed using the function below. Note that the `feature_description` is necessary here because datasets use graph-execution, and need this description to build their shape and type signature:
# + colab={} colab_type="code" id="zQjbIR1nleiy"
# Create a description of the features.
feature_description = {
'feature0': tf.io.FixedLenFeature([], tf.int64, default_value=0),
'feature1': tf.io.FixedLenFeature([], tf.int64, default_value=0),
'feature2': tf.io.FixedLenFeature([], tf.string, default_value=''),
'feature3': tf.io.FixedLenFeature([], tf.float32, default_value=0.0),
}
def _parse_function(example_proto):
# Parse the input `tf.Example` proto using the dictionary above.
return tf.io.parse_single_example(example_proto, feature_description)
# + [markdown] colab_type="text" id="gWETjUqhEQZf"
# Alternatively, use `tf.io.parse_example` to parse the whole batch at once. Apply this function to each item in the dataset using the `tf.data.Dataset.map` method:
# + colab={} colab_type="code" id="6Ob7D-zmBm1w"
parsed_dataset = raw_dataset.map(_parse_function)
parsed_dataset
# + [markdown] colab_type="text" id="sNV-XclGnOvn"
# Use eager execution to display the observations in the dataset. There are 10,000 observations in this dataset, but you will only display the first 10. The data is displayed as a dictionary of features. Each item is a `tf.Tensor`, and the `numpy` element of this tensor displays the value of the feature:
# + colab={} colab_type="code" id="x2LT2JCqhoD_"
for parsed_record in parsed_dataset.take(10):
print(repr(parsed_record))
# + [markdown] colab_type="text" id="Cig9EodTlDmg"
# Here, the `tf.io.parse_single_example` function unpacks the `tf.Example` fields into standard tensors.
# + [markdown] colab_type="text" id="jyg1g3gU7DNn"
# ## TFRecord files in Python
# + [markdown] colab_type="text" id="3FXG3miA7Kf1"
# The `tf.io` module also contains pure-Python functions for reading and writing TFRecord files.
# + [markdown] colab_type="text" id="CKn5uql2lAaN"
# ### Writing a TFRecord file
# + [markdown] colab_type="text" id="LNW_FA-GQWXs"
# Next, write the 10,000 observations to the file `test.tfrecord`. Each observation is converted to a `tf.Example` message, then written to file. You can then verify that the file `test.tfrecord` has been created:
# + colab={} colab_type="code" id="MKPHzoGv7q44"
# Write the `tf.Example` observations to the file.
with tf.io.TFRecordWriter(filename) as writer:
for i in range(n_observations):
example = serialize_example(feature0[i], feature1[i], feature2[i], feature3[i])
writer.write(example)
# + colab={} colab_type="code" id="EjdFHHJMpUUo"
# !du -sh {filename}
# + [markdown] colab_type="text" id="2osVRnYNni-E"
# ### Reading a TFRecord file
#
# These serialized tensors can be easily parsed using `tf.train.Example.ParseFromString`:
# + colab={} colab_type="code" id="U3tnd3LerOtV"
filenames = [filename]
raw_dataset = tf.data.TFRecordDataset(filenames)
raw_dataset
# + colab={} colab_type="code" id="nsEAACHcnm3f"
for raw_record in raw_dataset.take(1):
example = tf.train.Example()
example.ParseFromString(raw_record.numpy())
print(example)
# + [markdown] colab_type="text" id="S0tFDrwdoj3q"
# ## Walkthrough: Reading and writing image data
# + [markdown] colab_type="text" id="rjN2LFxFpcR9"
# This is an end-to-end example of how to read and write image data using TFRecords. Using an image as input data, you will write the data as a TFRecord file, then read the file back and display the image.
#
# This can be useful if, for example, you want to use several models on the same input dataset. Instead of storing the image data raw, it can be preprocessed into the TFRecords format, and that can be used in all further processing and modelling.
#
# First, let's download [this image](https://commons.wikimedia.org/wiki/File:Felis_catus-cat_on_snow.jpg) of a cat in the snow and [this photo](https://upload.wikimedia.org/wikipedia/commons/f/fe/New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg) of the Williamsburg Bridge, NYC under construction.
# + [markdown] colab_type="text" id="5Lk2qrKvN0yu"
# ### Fetch the images
# + colab={} colab_type="code" id="3a0fmwg8lHdF"
cat_in_snow = tf.keras.utils.get_file('320px-Felis_catus-cat_on_snow.jpg', 'https://storage.googleapis.com/download.tensorflow.org/example_images/320px-Felis_catus-cat_on_snow.jpg')
williamsburg_bridge = tf.keras.utils.get_file('194px-New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg','https://storage.googleapis.com/download.tensorflow.org/example_images/194px-New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg')
# + colab={} colab_type="code" id="7aJJh7vENeE4"
display.display(display.Image(filename=cat_in_snow))
display.display(display.HTML('Image cc-by: <a href="https://commons.wikimedia.org/wiki/File:Felis_catus-cat_on_snow.jpg">Von.grzanka</a>'))
# + colab={} colab_type="code" id="KkW0uuhcXZqA"
display.display(display.Image(filename=williamsburg_bridge))
display.display(display.HTML('<a href="https://commons.wikimedia.org/wiki/File:New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg">From Wikimedia</a>'))
# + [markdown] colab_type="text" id="VSOgJSwoN5TQ"
# ### Write the TFRecord file
# + [markdown] colab_type="text" id="Azx83ryQEU6T"
# As before, encode the features as types compatible with `tf.Example`. This stores the raw image string feature, as well as the height, width, depth, and arbitrary `label` feature. The latter is used when you write the file to distinguish between the cat image and the bridge image. Use `0` for the cat_in_snow image, and `1` for the williamsburg_bridge image.
# + colab={} colab_type="code" id="kC4TS1ZEONHr"
image_labels = {
cat_in_snow : 0,
williamsburg_bridge : 1,
}
# + colab={} colab_type="code" id="c5njMSYNEhNZ"
# This is an example, just using the cat image.
image_string = open(cat_in_snow, 'rb').read()
label = image_labels[cat_in_snow]
# Create a dictionary with features that may be relevant.
def image_example(image_string, label):
image_shape = tf.image.decode_jpeg(image_string).shape
feature = {
'height': _int64_feature(image_shape[0]),
'width': _int64_feature(image_shape[1]),
'depth': _int64_feature(image_shape[2]),
'label': _int64_feature(label),
'image_raw': _bytes_feature(image_string),
}
return tf.train.Example(features=tf.train.Features(feature=feature))
for line in str(image_example(image_string, label)).split('\n')[:15]:
print(line)
print('...')
# + [markdown] colab_type="text" id="2G_o3O9MN0Qx"
# Notice that all of the features are now stored in the `tf.Example` message. Next, functionalize the code above and write the example messages to a file named `images.tfrecords`:
# + colab={} colab_type="code" id="qcw06lQCOCZU"
# Write the raw image files to `images.tfrecords`.
# First, process the two images into `tf.Example` messages.
# Then, write to a `.tfrecords` file.
record_file = 'images.tfrecords'
with tf.io.TFRecordWriter(record_file) as writer:
for filename, label in image_labels.items():
image_string = open(filename, 'rb').read()
tf_example = image_example(image_string, label)
writer.write(tf_example.SerializeToString())
# + colab={} colab_type="code" id="yJrTe6tHPCfs"
# !du -sh {record_file}
# + [markdown] colab_type="text" id="jJSsCkZLPH6K"
# ### Read the TFRecord file
#
# You now have the file—`images.tfrecords`—and can now iterate over the records in it to read back what you wrote. Given that in this example you will only reproduce the image, the only feature you will need is the raw image string. Extract it using the getters described above, namely `example.features.feature['image_raw'].bytes_list.value[0]`. You can also use the labels to determine which record is the cat and which one is the bridge:
# + colab={} colab_type="code" id="M6Cnfd3cTKHN"
raw_image_dataset = tf.data.TFRecordDataset('images.tfrecords')
# Create a dictionary describing the features.
image_feature_description = {
'height': tf.io.FixedLenFeature([], tf.int64),
'width': tf.io.FixedLenFeature([], tf.int64),
'depth': tf.io.FixedLenFeature([], tf.int64),
'label': tf.io.FixedLenFeature([], tf.int64),
'image_raw': tf.io.FixedLenFeature([], tf.string),
}
def _parse_image_function(example_proto):
# Parse the input tf.Example proto using the dictionary above.
return tf.io.parse_single_example(example_proto, image_feature_description)
parsed_image_dataset = raw_image_dataset.map(_parse_image_function)
parsed_image_dataset
# + [markdown] colab_type="text" id="0PEEFPk4NEg1"
# Recover the images from the TFRecord file:
# + colab={} colab_type="code" id="yZf8jOyEIjSF"
for image_features in parsed_image_dataset:
image_raw = image_features['image_raw'].numpy()
display.display(display.Image(data=image_raw))
# -
# Copyright 2020 Google Inc.
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
|
preparing-gcp-ml-engineer/introduction-to-tensorflow/tfrecord-tf.example.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # MNIST Dataset for Image recognition with Deep Learning
# _________________
# ##### By: <NAME>
#
# Using this notebook to learn about using deep networks for image recognition
# MNIST is 28x28 images of hand-written digits 0-9
# +
import tensorflow as tf
## Import the dataset and unpack it into x and y train and test
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
# +
import matplotlib.pyplot as plt
plt.imshow(x_train[0], cmap=plt.cm.binary)
plt.show()
print(x_train[0])
# -
# Here we can see the first element in x_train. By plotting it using matplotlib it is clear that the element is a 5. The array representation of the 28x28 image uses values from 0-255, in accordance with pixel/RGB conventions.
#
# For the purpose of training the NN we want to normalize the data, scaling it to values between 0-1 instead of 0-255
x_train = tf.keras.utils.normalize(x_train, axis=1)
x_test = tf.keras.utils.normalize(x_test, axis=1)
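# Note: `tf.keras.utils.normalize` performs (by default) an L2 normalization along the given axis. A simpler, commonly used alternative for image data is plain pixel scaling — a minimal sketch on a hypothetical row of 8-bit pixel values:

```python
import numpy as np

# Hypothetical 8-bit pixel values; dividing by 255 maps them into [0, 1].
pixels = np.array([0, 64, 128, 255], dtype=np.float32)
scaled = pixels / 255.0
```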
# +
import matplotlib.pyplot as plt
plt.imshow(x_train[0], cmap=plt.cm.binary)
plt.show()
print(x_train[0])
# -
# By replotting and looking at the array representation we can see how the values have been scaled to between 0-1; the 5, although still readable, has been altered.
# We will want to make a Sequential or feed-forward network. The first layer will be a Flatten layer to flatten the 28x28 array. We will use the relu activation for the two hidden layers. The last layer will use softmax for a probability distribution.
#
# The optimizer will be adam and the loss function will be sparse categorical crossentropy.
# +
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(10, activation=tf.nn.softmax))
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=3)
# -
# We can see the in-sample accuracy is 97%, so we must validate with out-of-sample data
val_loss, val_acc = model.evaluate(x_test, y_test)
print(val_loss, val_acc)
# The 97% out-of-sample accuracy is solid, as the model is not overfitted to the in-sample data and was successful at generalizing.
|
MNIST_NN.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
# # Lab 2: set up a numerical hydrostatic equilibrium
#
# Follow all of the steps below. You will need to write your own code where prompted (e.g., to calculate enclosed mass and the pressure profile). Please answer all questions, and make plots where requested. For grading, it would be useful to write short comments like those provided to explain what your code is doing.
#
# Collaborate in teams, ask your classmates, and reach out to the teaching team when stuck!
# ### define a set of zone locations in radius
# +
# number of zone centers
n_zones = 128
# inner and outer radius in cm
r_inner = 1e8
r_outer = 1e12
# calculate the radius of each zone *interface*, with the innermost interface at r_inner and the outermost at r_outer
r_I = r_inner*10**(np.arange(n_zones+1)/n_zones*np.log10(r_outer/r_inner))
# now use the interface locations to calculate the zone centers, halfway in between inner/outer interface
# this is the size of each zone
Delta_r = r_I[1:]-r_I[:-1]
# this is the set of zone centers
r_zones = r_I[:-1]+Delta_r/2.
# -
# let's visualize the grid, choosing every 4th point
# note that the plot is on a log scale in x, while the zone centers are supposed to be midway between the interfaces
# as defined on a linear scale
for rr in r_I[::4]:
plt.semilogx(rr+np.zeros(50),np.arange(50)/49.,linestyle='-',color='k')
plt.semilogx(r_zones[1::4],np.zeros_like(r_zones[1::4])+0.5,marker='o',linestyle='',markersize=10)
# ### set a "power law" density profile $\rho(r) \propto r^{-2}$
# +
# let the inner density be some arbitrary value in g cm^-3, here a value typical of Sun-like stars on the main sequence
rho0 = 1e2
# calculate the density profile at zone centers
rho_zones = rho0*(r_zones/r_inner)**(-2.)
# -
# ### 1. calculate the mass enclosed in each zone and the initial net velocity
# +
m_zones =
v_zones = np.zeros_like(rho_zones)
# -
# ### 2. use the discretized hydrostatic equilibrium equation to calculate the pressure at each interface
# --think about how to do the calculation one zone at a time, going backwards from the outer zone
#
# --what is the pressure at the outer boundary? (the "boundary condition" needed to solve the differential equation)
# +
# solve for P_I
P_I = np.zeros_like(r_I)
# -
# ### 3. test how well our differenced equation works
#
# Compare the left hand side and right hand side of the numerical hydrostatic equilibrium equation. Measure the error as for example
#
# |left hand side - right hand side| / |left hand side|
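# As a generic sketch (not the lab solution), the relative error between the two sides can be computed elementwise — `lhs` and `rhs` below are placeholder arrays standing in for the two sides of the discretized equation:

```python
import numpy as np

def rel_error(lhs, rhs):
    """Elementwise |lhs - rhs| / |lhs|, a simple measure of fractional error."""
    return np.abs(lhs - rhs) / np.abs(lhs)

# toy example: rhs differs from lhs by 1% in each entry
lhs = np.array([2.0, 4.0, 8.0])
rhs = lhs * 1.01
err = rel_error(lhs, rhs)
```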
# +
# enter code here
# -
# ### 4. calculate P at zone centers
P_zones =
# ## now we want to put this set of fluid variables (along with v = 0) into the time-dependent hydro code!
# ### 5. make a prediction: when we do this, what do you expect for the behavior of rho(r), P(r), v(r) as functions of time?
# write your prediction here in 1-2 sentences
# ### now let's setup a hydro problem using the data generated above
#
# the cell below is defining a hydrodynamics problem using our hydro3.py code, defining initial and boundary conditions, and then replacing its own initial data with what we have generated above.
#
# you do not need to edit this cell, but please reach out with questions if you're wondering what it does.
# +
import hydro3
# define a dictionary of arguments that we will pass to the hydrodynamics code specifying the problem to run
args = {'nz':4000,'ut0':3e5,'udens':1e-5,'utslope':0.,'pin':0,'piston_eexp':1e51,'v_piston':1e9,'piston_stop':10,'r_outer':5e13,'rmin':1e7,'t_stop':1e6,'noplot':1}
# define the variable h which is a "lagrange_hydro_1d" object (instance of a class)
h = hydro3.lagrange_hydro_1d(**args)
# variables stored within our object h are accessed by h.variable_name
h.bctype=[h.INFLOW, h.OUTFLOW]
h.itype=h.POWERLAW
h.setup_initial_conditions()
# here we replace the code's initial conditions data with our own
# (no need to edit these lines!)
# number of zones
h.nz = n_zones
# zones.r are the outer interface positions
h.r_inner = r_I[0]/2.
h.zones.r = r_I[1:]
h.zones.dr = r_I[1:]-r_I[:-1]
# v = 0 everywhere initially
h.zones.v = np.zeros_like(h.zones.r)
# density, mass, pressure at zone centers
h.zones.mass = dm
h.zones.mcum = m_zones
h.zones.d = rho_zones
h.zones.p = P_zones
# equation of state to compute u/rho from p
h.zones.e = 1./(h.gamma-1.)*h.zones.p/h.zones.d
# there's no mass inside the inner boundary
h.mass_r_inner = h.zones.mass[0]
# artificial viscosity (ignore for now!)
h.zones.q = hydro3.get_viscosity(h.zones,h.v_inner,h.C_q)
h.initialize_boundary_conditions()
# -
# ### let's run the code and see what happens!
h.run()
# ### 6. make plots of:
# --mass density vs radius (log-log "plt.loglog")
#
# --velocity vs radius (linear-log "plt.semilogx").
#
# Qualitatively, what does it look like has happened? Does this match your expectations? Why or why not?
# +
# plotting code here
# -
# ### 7. finally, measure a few global quantities: $E_{\rm tot}$, $E_k$ ($K$), $E_{\rm int}$ ($U$), $E_{\rm grav}$ ($W$)
#
# What do you think the kinetic energy should be in hydrostatic equilibrium?
#
# Then use the Virial theorem to calculate (or look up or ask about) expected relationships between the thermal (internal) energy and the gravitational energy, and in turn the total (kinetic+gravitational+thermal) energy.
#
# How well does your result agree with expectations? You can measure fractional errors for example as |expected value - numerical value| / |expected value|. When the expected value is zero, you could instead use something like we did above: |left hand side - right hand side| / |left hand side|.
# +
# calculate quantities and errors here
# -
|
lab2/astr3400_lab2.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline
import seaborn as sns
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score
from sklearn import metrics
from sklearn.model_selection import train_test_split
df = pd.read_csv("E:/Data Science/Modules/Module 4(ML)/KNN/Data/column_2C_weka.csv")
df
df.head()
df.shape
df.isnull().sum()
df.info()
df.describe()
df["class"].value_counts()
# #### **Data Visualisation**
df.groupby("class").plot(kind = "hist")
df.groupby("class").hist(figsize=(9,9))
# #### **Feature Scaling**
# +
num_col = df.columns[0:6]
x = df[num_col]
y = df["class"]
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
x_new = scaler.fit_transform(x)
x_new
# -
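# `MinMaxScaler` applies x' = (x - min) / (max - min) per column; a minimal NumPy equivalent on a toy array:

```python
import numpy as np

# Toy 3x2 feature matrix; each column is scaled independently onto [0, 1].
x = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0]])
x_scaled = (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))
```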
df.head()
# #### **Train test split**
x_train, x_test, y_train, y_test = train_test_split(x_new, y, test_size = 0.3)
print("Training feature set size:",x_train.shape)
print("Test feature set size:",x_test.shape)
print("Training variable set size:",y_train.shape)
print("Test variable set size:",y_test.shape)
# #### **Model fitting & training**
acc_value = [] # store the accuracy score for each value of K
for K in range(20):
K=K+1
model = KNeighborsClassifier(n_neighbors = K)
model.fit(x_train, y_train)
y_pred = model.predict(x_test)
accuracy = model.score(x_test, y_test)
acc_value.append(accuracy)
print("Accuracy Score for k= " , K , "is:", accuracy)
curve = pd.DataFrame(acc_value)
curve.plot(kind = "bar", title = "Plotting Accuracy for different K values", legend = False )
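# The best K can also be read off programmatically rather than from the bar plot — a sketch using hypothetical accuracies (remember K starts at 1 while list indices start at 0):

```python
import numpy as np

# Hypothetical accuracies for K = 1..5 (placeholders, not the real scores above).
acc_value = [0.71, 0.78, 0.80, 0.79, 0.82]
best_k = int(np.argmax(acc_value)) + 1  # +1 because K starts at 1
```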
# +
final_model = KNeighborsClassifier(n_neighbors = 9)
final_model.fit(x_train, y_train)
y_pred = final_model.predict(x_test)
# -
# ### **Evaluation**
print("Accuracy from sklearn: {0}".format(final_model.score(x_test, y_test)))
#Return the mean accuracy on the given test data and labels.
# +
cnf_matrix = metrics.confusion_matrix(y_test, y_pred, labels = ["Abnormal", "Normal"])
#The confusion matrix lays out the correctly and incorrectly classified cases in a tabular format
# Predicted Positive Predicted Negative
# Actual Positive True Positive False Negative
# Actual Negative False Positive True Negative
print(metrics.classification_report(y_test, y_pred, labels = ["Abnormal", "Normal"]))
# In classification_report (printing the precision and recall, among other metrics)
# +
data = {"y_actual": y_test,
"y_predicted": y_pred}
df_check = pd.DataFrame(data, columns = ["y_actual", "y_predicted"])
# -
df_check.head()
# +
class_names = [0,1]
fig, ax = plt.subplots()
# tick_marks = np.arange(len(class_names))
# plt.xticks(tick_marks, class_names)
# plt.yticks(tick_marks, class_names)
sns.heatmap(pd.DataFrame(cnf_matrix), annot=True) # annot=True writes the data value in each cell
ax.xaxis.set_label_position("top")
plt.tight_layout() # automatically adjust subplot parameters to give specified padding
plt.title('Confusion matrix', y=1.1)
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
# -
|
KNN_Classification.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # [Ateliers: Technologies de l'intelligence Artificielle](https://github.com/wikistat/AI-Frameworks)
# <center>
# <a href="http://www.insa-toulouse.fr/" ><img src="http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/logo-insa.jpg" style="float:left; max-width: 120px; display: inline" alt="INSA"/></a>
# <a href="http://wikistat.fr/" ><img src="http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/wikistat.jpg" width=400, style="max-width: 150px; display: inline" alt="Wikistat"/></a>
# <a href="http://www.math.univ-toulouse.fr/" ><img src="http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/logo_imt.jpg" width=400, style="float:right; display: inline" alt="IMT"/> </a>
#
# </center>
# # Natural Language Processing (NLP): Categorizing Cdiscount Products
#
# This is a simplified version of the contest proposed by Cdiscount and published on [datascience.net](https://www.datascience.net/fr/challenge). The training data is available on request from Cdiscount, but the solutions for the contest's test sample are not, and will not be, made public. A test sample is therefore built for this tutorial. The goal is to predict a product's category from its description (*text mining*). Only the main category (1st level, 47 classes) is predicted, instead of the three levels required in the contest. The aim is rather to compare the performance of methods and technologies as a function of training-set size, and to illustrate the preprocessing of text data on a complex example.
#
# The full dataset (15M products) allows a full-scale test of **volume scaling** for the preparation (*munging*), vectorization (hashing, TF-IDF) and learning phases, depending on the technology used.
#
# A synthesis of the results is developed in [Besse et al. 2016](https://hal.archives-ouvertes.fr/hal-01350099) (section 5).
# ## Part 2-1 Categorizing Cdiscount Products with [Scikit-learn](https://spark.apache.org/docs/latest/ml-guide.html) <a href="http://spark.apache.org/"><img src="http://spark.apache.org/images/spark-logo-trademark.png" style="max-width: 100px; display: inline" alt="Python"/></a>
# The main objective is to compare the performance — computation time and quality of results — of the main technologies; here Python with the Scikit-Learn library. This is a text-mining problem that necessarily chains several steps, and the best strategy depends on the step:
# - Spark for data preparation: cleaning, stemming
# - Python Scikit-learn for the next transformation (TF-IDF) and for learning, notably with logistic regression, which yields the best results.
#
#
# The goal here is to compare the performance of methods and technologies as a function of training-set size. The category over/under-sampling strategy, which improves prediction, was not implemented.
#
# * The example is presented with the option of subsampling in order to reduce computation time.
# * The reduced sample can be reduced further, then, after "cleaning", split into two parts: training and test.
# * The text data of the training sample are "stemmed", "hashed" and "vectorized" before modeling.
# * The same transformations, notably hashing and TF-IDF, fitted on the training sample are applied to the test sample.
# * A single model is estimated by "multinomial" logistic regression — more precisely and implicitly, one model per class.
# * Various parameters — vectorization (hashing, TF-IDF) and logistic regression parameters (L1 penalty) — could still be optimized.
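# The last bullet notes that multinomial logistic regression implicitly fits one score per class; class probabilities then come from a softmax over those scores. The sketch below is illustrative only — it is not Scikit-learn's actual implementation:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)  # shift scores for stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Per-class scores for 2 samples over 3 classes
scores = np.array([[2.0, 1.0, 0.1],
                   [0.5, 2.5, 0.5]])
probs = softmax(scores)
print(probs.argmax(axis=1))  # predicted class per sample
```

# Each row of `probs` sums to 1, and the predicted class is simply the arg-max over classes.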
# + code_folding=[]
#Importation des librairies utilisées
import unicodedata
import time
import pandas as pd
import numpy as np
import random
import nltk
import collections
import itertools
import csv
import warnings
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in scikit-learn 0.20
# -
# ## 1. Importing the data
# Definition of the working directory, the names of the various files used, and the global variables.
#
# First, download the files `Categorie_reduit.csv` and `lucene_stopwords.txt`, available in the data corpus of [wikistat](http://wikistat.fr/).
#
# Once downloaded, place these files in the working directory of your choice and set its path in the `DATA_DIR` variable.
# + code_folding=[]
# Working directory
DATA_DIR = ""
# File names
training_reduit_path = DATA_DIR + "data/cdiscount_train.csv.zip"
# Global variables
HEADER_TEST = ['Description','Libelle','Marque']
HEADER_TRAIN =['Categorie1','Categorie2','Categorie3','Description','Libelle','Marque']
# + code_folding=[]
## If needed (first run), download the nltk resources used for
## stop-word removal and stemming
# nltk.download()
# -
# ### Read & Split Dataset
# Function that reads the training file and creates two Pandas DataFrames: one for training, the other for validation.
# It first builds a DataFrame by reading the whole file, then splits it in two with sklearn's dedicated function.
# + code_folding=[]
def split_dataset(input_path, nb_line, tauxValid,columns):
time_start = time.time()
data_all = pd.read_csv(input_path,sep=",",names=columns,nrows=nb_line)
data_all = data_all.fillna("")
data_train, data_valid = train_test_split(data_all, test_size = tauxValid)
time_end = time.time()
print("Split Takes %d s" %(time_end-time_start))
return data_train, data_valid
nb_line=20000 # number of lines read from the (already reduced) initial file
tauxValid=0.10 # fraction of the data held out for validation
data_train, data_valid = split_dataset(training_reduit_path, nb_line, tauxValid, HEADER_TRAIN)
# This line shows the first 5 rows of the DataFrame
data_train.head(5)
# -
# ## 2. Cleaning the data
# To limit the dimension of the feature space while keeping the essential information, the data must be cleaned in several steps:
# * Every word is lower-cased.
# * Numeric terms, punctuation and other symbols are removed.
# * 155 common — hence uninformative — French words are removed (STOPWORDS), e.g. le, la, du, alors, etc.
# * Each word is stemmed with the `STEMMER.stem` function of the nltk library. Stemming reduces a word to its root: for example, the words cheval, chevaux, chevalier, chevalerie, chevaucher are all replaced by "cheva".
# ### Importing the libraries and files used for data cleaning.
# +
# Libraries
from bs4 import BeautifulSoup # HTML cleaning
import re # Regex
import nltk # Text cleaning
## lists of words to remove from the product descriptions
## From NLTK
nltk_stopwords = nltk.corpus.stopwords.words('french')
## From an external file.
lucene_stopwords =open(DATA_DIR+"data/lucene_stopwords.txt","r").read().split(",") #Local file
## Union of the two stopword lists
stopwords = list(set(nltk_stopwords).union(set(lucene_stopwords)))
## Stemmer performing the racinization (stemming) of words
stemmer=nltk.stem.SnowballStemmer('french')
# -
# ### Text-cleaning function
# Takes a text as input and returns it cleaned by applying, in order: HTML stripping, lower-casing, uniform encoding, removal of non-alphanumeric characters (punctuation), stopword removal, and stemming of each individual word.
# + code_folding=[]
# General cleaning function
def clean_txt(txt):
### remove html stuff
txt = BeautifulSoup(txt,"html.parser",from_encoding='utf-8').get_text()
### lower case
txt = txt.lower()
### special escaping character '...'
txt = txt.replace(u'\u2026','.')
txt = txt.replace(u'\u00a0',' ')
### remove accent btw
txt = unicodedata.normalize('NFD', txt).encode('ascii', 'ignore').decode("utf-8")
###txt = unidecode(txt)
### remove non alphanumeric char
txt = re.sub('[^a-z_]', ' ', txt)
### remove french stop words
tokens = [w for w in txt.split() if (len(w)>2) and (w not in stopwords)]
### french stemming
tokens = [stemmer.stem(token) for token in tokens]
### tokens = stemmer.stemWords(tokens)
return ' '.join(tokens)
def clean_marque(txt):
txt = re.sub('[^a-zA-Z0-9]', '_', txt).lower()
return txt
# -
# ### Cleaning the DataFrames
# Applies the cleaning to every row of the DataFrame
# + code_folding=[]
# file-cleaning function (stemming and list of words to remove)
def clean_df(input_data, column_names= ['Description', 'Libelle', 'Marque']):
#Test if columns entry match columns names of input data
column_names_diff= set(column_names).difference(set(input_data.columns))
if column_names_diff:
warnings.warn("Column(s) '"+", ".join(list(column_names_diff)) +"' do(es) not match columns of input data", Warning)
nb_line = input_data.shape[0]
print("Start Clean %d lines" %nb_line)
# Cleaning start for each columns
time_start = time.time()
clean_list=[]
for column_name in column_names:
column = input_data[column_name].values
if column_name == "Marque":
array_clean = np.array(list(map(clean_marque,column)))
else:
array_clean = np.array(list(map(clean_txt,column)))
clean_list.append(array_clean)
time_end = time.time()
print("Cleaning time: %d secondes"%(time_end-time_start))
#Convert list to DataFrame
array_clean = np.array(clean_list).T
data_clean = pd.DataFrame(array_clean, columns = column_names)
return data_clean
# -
# Takes approximately 2 minutes for 100,000 rows
data_valid_clean = clean_df(data_valid)
data_train_clean = clean_df(data_train)
# Show the first 5 rows of the training DataFrame after cleaning.
data_train_clean.head(5)
# ## 3. Building the features (TF-IDF)
# ### Introduction
# Vectorization, i.e. building features from the list of words, is done in 2 steps:
# * **Hashing.** It reduces the variable space (dictionary size) to a limited number `n_hash` of features, fixed a priori. It relies on a hash function $h$ which, to an index $j$ defined over the natural integers, assigns an index $i=h(j)$ in the reduced feature space (1 to n_hash). The weight of index $i$ in the new space thus aggregates the weights of all indices $j$ such that $i=h(j)$ in the original space. Here, weights are combined following the method described by Weinberger et al. (2009).
#
# N.B. $h$ is not generated randomly, so for a given training (or test) file and a given integer n_hash, the hashing result is identical.
#
# * **TF-IDF.** TF-IDF highlights the relative importance of each word $m$ (or pair of consecutive words) in a product description $d$, relative to the whole list of products. The function $TF(m,d)$ counts the occurrences of word $m$ in description $d$. The function $IDF(m)$ measures the importance of the term across all documents by giving more weight to the least frequent terms, considered the most discriminating (a motivation analogous to the chi-squared metric in correspondence analysis). $IDF(m)=\log\frac{D}{f(m)}$ where $D$ is the number of documents — the size of the training sample — and $f(m)$ the number of documents containing word $m$. The new variable or *feature* is $V_m(d)=TF(m,d)\times IDF(m)$.
#
# * As with transformations of quantitative variables (centering, scaling), the same transformation — i.e. the same weights — is fitted on the training sample and applied to the test sample.
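# The two steps can be illustrated on a toy corpus. The bucket mapping via Python's built-in `hash` and the raw-count TF below are simplifications of scikit-learn's `FeatureHasher` and `TfidfTransformer`, kept deliberately minimal:

```python
import math
from collections import Counter

docs = [
    "coque iphone silicone",
    "coque samsung rigide",
    "chargeur iphone rapide",
]

# --- Hashing trick: map each token to one of n_hash buckets ---
def hash_features(doc, n_hash):
    vec = [0] * n_hash
    for token, count in Counter(doc.split()).items():
        vec[hash(token) % n_hash] += count  # colliding tokens merge weights
    return vec

# --- TF-IDF: tf(m, d) * log(D / f(m)) ---
D = len(docs)
df = Counter(t for doc in docs for t in set(doc.split()))  # document frequency
def tfidf(doc):
    tf = Counter(doc.split())
    return {t: c * math.log(D / df[t]) for t, c in tf.items()}

w = tfidf(docs[0])
# "coque" appears in 2 of 3 docs -> low IDF; "silicone" in 1 -> high IDF
print(sorted(w, key=w.get, reverse=True))
```

# Rare, discriminating words like "silicone" get larger weights than common ones like "coque", which is exactly the behavior TF-IDF is designed to produce.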
# ### Vectorization functions
# +
## Build a matrix of the word frequencies
## contained in each description;
## many parameters could still be tuned here
from sklearn.feature_extraction.text import TfidfVectorizer, TfidfTransformer
from sklearn.feature_extraction import FeatureHasher
def vectorizer_train(df, columns=['Description', 'Libelle', 'Marque'], nb_hash=None, stop_words=None):
# Hashage
if nb_hash is None:
data_hash = map(lambda x : " ".join(x), df[columns].values)
feathash = None
# TFIDF
vec = TfidfVectorizer(
min_df = 1,
stop_words = stop_words,
smooth_idf=True,
norm='l2',
sublinear_tf=True,
use_idf=True,
ngram_range=(1,2)) #bi-grams
tfidf = vec.fit_transform(data_hash)
else:
df_text = map(lambda x : collections.Counter(" ".join(x).split(" ")), df[columns].values)
feathash = FeatureHasher(nb_hash)
data_hash = feathash.fit_transform(map(collections.Counter,df_text))
vec = TfidfTransformer(use_idf=True,
smooth_idf=True, sublinear_tf=False)
tfidf = vec.fit_transform(data_hash)
return vec, feathash, tfidf
def apply_vectorizer(df, vec, columns =['Description', 'Libelle', 'Marque'], feathash = None ):
#Hashage
if feathash is None:
data_hash = map(lambda x : " ".join(x), df[columns].values)
else:
df_text = map(lambda x : collections.Counter(" ".join(x).split(" ")), df[columns].values)
data_hash = feathash.transform(df_text)
# TFIDF
tfidf=vec.transform(data_hash)
return tfidf
# +
vec, feathash, X = vectorizer_train(data_train_clean, nb_hash=60)
Y = data_train['Categorie1'].values
Xv = apply_vectorizer(data_valid_clean, vec, feathash=feathash)
Yv=data_valid['Categorie1'].values
# -
# ## 4. Modeling and performance
# Logistic regression
## fit
from sklearn.linear_model import LogisticRegression
cla = LogisticRegression(C=100)
cla.fit(X,Y)
score=cla.score(X,Y)
print('# training score:',score)
## validation error
scoreValidation=cla.score(Xv,Yv)
print('# validation score:',scoreValidation)
# CART method
from sklearn import tree
clf = tree.DecisionTreeClassifier()
time_start = time.time()
clf = clf.fit(X, Y)
time_end = time.time()
print("CART Takes %d s" %(time_end-time_start) )
score=clf.score(X,Y)
print('# training score :',score)
scoreValidation=clf.score(Xv,Yv)
print('# validation score :',scoreValidation)
# Random forest
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(n_estimators=100,n_jobs=-1,max_features=24)
time_start = time.time()
rf = rf.fit(X, Y)
time_end = time.time()
print("RF Takes %d s" %(time_end-time_start) )
score=rf.score(X,Y)
print('# training score :',score)
scoreValidation=rf.score(Xv,Yv)
print('# validation score :',scoreValidation)
|
NatualLangageProcessing/Part2-1-AIF-PythonWorkflow-Cdiscount.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a" colab={} colab_type="code" id="mU7Kb8-6Qs9m"
# Common imports
import numpy as np
import os
from backend import import_excel, export_excel
# To plot pretty figures
# %matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.style as style
# style.use('bmh')
from mpl_toolkits.mplot3d import Axes3D
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
import pandas as pd
import seaborn as sns
import tensorflow as tf
import keras
import random
import sys
sys.path.append("..")
import dataset, network, GPR_Model, prob_dist
import WGAN_Model
# + [markdown] colab_type="text" id="37tykamPQs9o"
# # Load data
# +
scenario = "moons"
n_instance = 1000 # number of generated points
n_features = 2
# dataset.get_dataset handles every scenario uniformly (the original if/else branches were identical)
X_train, y_train, X_test, y_test, X_valid, y_valid = dataset.get_dataset(n_instance, scenario)
os.makedirs('Dataset', exist_ok=True)
os.makedirs('GANS/Models', exist_ok=True)
os.makedirs('GANS/Losses', exist_ok=True)
os.makedirs('GANS/Random_test', exist_ok=True)
export_excel(X_train, 'Dataset/X_train')
export_excel(y_train, 'Dataset/y_train')
# print(X_train.shape,y_train.shape)
X_train = import_excel('Dataset/X_train')
y_train = import_excel('Dataset/y_train')
print('made dataset')
# Preprocessing
vars = np.zeros((6,864))
j = 0
for i in range(6):
for i2 in range(4):
for i3 in range(3):
for i4 in range(2):
for i5 in range(3):
for i6 in range(2):
vars[0,j]=i+2
vars[1,j]=i2
vars[2,j]=i3
vars[3,j]=i4
vars[4,j]=i5
vars[5,j]=i6
j = j +1
j = 0#int(sys.argv[1])-1
print(vars[:,j])
n_features = 2
n_var =int(vars[0,j])
latent_spaces = [3,10,50,100]
latent_space = 3#int(latent_spaces[int(vars[1,j])])
batchs = [10,100,1000]
BATCH_SIZE = 100#int(batchs[int(vars[2,j])])
scales = ['-1-1','0-1']
scaled = '-1-1'#scales[int(vars[3,j])]
epochs = 5001 #[1000,10000,10000]
# epoch = int(epochs[int(vars[4,j])])
bias = [True,False]
use_bias = True#(bias[int(vars[5,j])])
# -
# # WGAN
# ### Preprocessing
wgan = WGAN_Model.WGAN(n_features,latent_space,BATCH_SIZE,n_var,use_bias)
train_dataset, scaler, X_train_scaled = wgan.preproc(X_train, y_train, scaled)
hist = wgan.train(train_dataset, epochs, scaler, scaled, X_train, y_train)
wgan.generator.save('GANS/Models/GAN_'+str(j))
# plot loss
print('Loss: ')
fig, ax = plt.subplots(1,1, figsize=[10,5])
ax.plot(hist)
ax.legend(['loss_gen', 'loss_disc'])
#ax.set_yscale('log')
ax.grid()
plt.tight_layout()
plt.savefig('GANS/Losses/GANS_loss'+str(j)+'.png')
generator = keras.models.load_model('GANS/Models/GAN_'+str(j))
plt.close()
# ### Prediction
# +
latent_values = tf.random.normal([1000, latent_space], mean=0.0, stddev=0.1)
predicted_values = wgan.generator.predict(latent_values)
# undo the Min/Max scaling (the same call applies to both the '-1-1' and '0-1' ranges)
predicted_values = scaler.inverse_transform(predicted_values)
plt.plot(X_train,y_train,'o')
plt.plot(predicted_values[:,0],predicted_values[:,1],'o')
plt.show()
np.shape(latent_values)
np.shape(predicted_values)
# -
train_dataset
# +
x_input = [-1, 0, 0.5, 1.5]
n_points = 80
y_min = -0.75
y_max = 1
# produces an input of fixed x coordinates with random y values
predict1 = np.full((n_points//4, 2), x_input[0])
predict2 = np.full((n_points//4, 2), x_input[1])
predict3 = np.full((n_points//4, 2), x_input[2])
predict4 = np.full((n_points//4, 2), x_input[3])
predictthis = np.concatenate((predict1, predict2, predict3, predict4))
for n in range(n_points):
predictthis[n,1] = random.uniform(y_min, y_max)
predictthis_scaled = scaler.transform(predictthis)
input_test = predictthis_scaled.reshape(-1, n_features).astype('float32')
print(input_test)
# +
mse = tf.keras.losses.MeanSquaredError()
optimizer = tf.keras.optimizers.Adam(1e-2)
def mse_loss(inp, outp):
"""
Calculates the MSE loss between the x-coordinates
"""
inp = tf.reshape(inp, [-1, n_features])
outp = tf.reshape(outp, [-1, n_features])
return mse(inp[:,0], outp[:,0])
def opt_step(latent_values, real_coding):
"""
Minimizes the loss between generated point and inputted point
"""
with tf.GradientTape() as tape:
tape.watch(latent_values)
gen_output = wgan.generator(latent_values, training=False)
loss = mse_loss(real_coding, gen_output)
gradient = tape.gradient(loss, latent_values)
optimizer.apply_gradients(zip([gradient], [latent_values]))
return loss
def optimize_coding(real_coding):
"""
Optimizes the latent space values
"""
latent_values = tf.random.normal([1, latent_space], mean=0.0, stddev=0.1)
latent_values = tf.Variable(latent_values)
    losses = []  # np.array() takes no empty argument; collect losses in a list instead
    for epoch in range(500):
        losses.append(opt_step(latent_values, real_coding).numpy())
    return latent_values
def predict(input_data, scaler, scaled):
"""
Optimizes the latent space of the input then produces a prediction from
the generator.
"""
predicted_vals = np.zeros((1, n_features))
for n in range(len(input_data)):
print("Optimizing latent space for point ", n, " / ", len(input_data))
real_coding = input_data[n].reshape(1, n_features)
real_coding = tf.constant(real_coding)
real_coding = tf.cast(real_coding, dtype=tf.float32)
print(real_coding)
latent_values = optimize_coding(real_coding)
print(latent_values)
#predicted_vals.append(scaler.inverse_transform(wgan.generator.predict(tf.convert_to_tensor(latent_values)).reshape(1,n_features)))
predicted_vals_1 = scaler.inverse_transform((wgan.generator.predict(tf.convert_to_tensor(latent_values)).reshape(1, n_features)))
# predicted_vals_1 = predicted_vals_1.reshape(1, self.n_features)
predicted_vals = np.concatenate((predicted_vals, predicted_vals_1), axis=0)
predicted_vals = predicted_vals[1:,:]
return predicted_vals
# -
X_generated = predict(input_test, scaler, scaled)
print(X_generated)
plt.title("Prediction at x = -1, 0, 1.5")
plt.scatter(X_train, y_train, label="Training data")
#plt.scatter(predictthis[:,0], predictthis[:,1], label="Sample data", c="pink")
plt.scatter(X_generated[:,0], X_generated[:,1], label="Fixed Input Prediction")
plt.legend(loc='upper right')
plt.tight_layout()
plt.xlabel("x")
plt.ylabel("y")
|
Toby/Fixed_Input.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
sc
file = sc.textFile('README.md')
counts = file.flatMap(lambda line: line.split(" ")).map(lambda word: (word, 1)).reduceByKey(lambda v1, v2: v1+v2)
counts.collect()
|
example.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#https://medium.com/coinmonks/how-to-get-images-from-imagenet-with-python-in-google-colaboratory-aeef5c1c45e5
# +
from bs4 import BeautifulSoup
import numpy as np
import requests
import cv2
import PIL.Image
import urllib
import matplotlib.pyplot as plt
import numpy as np
import sys
# %matplotlib inline
import nltk
nltk.download('wordnet')
from nltk.corpus import wordnet as wn
import os
# +
path = '/Users/tyler/Desktop/dissertation/programming/tcav/test_examples/zebra/img99.jpg'
os.path.exists(path)
# -
# ## Get Image URLs
# +
offset = 2374451
page = requests.get("http://www.image-net.org/api/text/imagenet.synset.geturls?wnid=n0" + str(offset))
soup = BeautifulSoup(page.content, 'html.parser')
str_soup = str(soup)
split_urls=str_soup.split('\r\n')
print(len(split_urls))
# -
# +
#split_urls[0]
# -
def url_to_image(url):
resp = urllib.request.urlopen(url)
image = np.asarray(bytearray(resp.read()), dtype="uint8")
image = cv2.imdecode(image, cv2.IMREAD_COLOR)
return image
# +
#url = split_urls[0]
#resp = urllib.request.urlopen(url)
# -
path = '/Users/tyler/Desktop/dissertation/programming/tcav/test_examples/zebra/img99.jpg'
im = cv2.imread(path)
print (type(im))
try:
im = url_to_image(split_urls[6])
h,w = im.shape[:2]
print(im.shape)
plt.imshow(im,cmap='gray')
plt.show()
except:
print('Error getting image')
# +
## Write to files
this_dir = '/Users/tyler/Desktop/dissertation/programming/tcav/concepts/zebra2'
img_rows, img_cols = 32, 32
input_shape = (img_rows, img_cols, 3)
n_of_training_images = 10000
for progress in range(n_of_training_images):
if(progress%20==0):
print(progress)
if not split_urls[progress] == None:
try:
I = url_to_image(split_urls[progress])
if len(I.shape) == 3:
save_path = os.path.join(this_dir,'img' + str(progress) + '.jpg')
#print(save_path)
cv2.imwrite(save_path,I)
except:
pass
#print('Error getting image')
# +
#save_path
# -
# ## Dealing with labels
zebra = wn.synset_from_pos_and_offset('n', 2391049)
print(zebra)
this_concept = wn.synsets('horse')[0]
print(this_concept)
wn.synsets('horse')
this_concept.offset()
test = wn.synsets('stripe')
print(test)
test[0].offset()
# +
#x = striped[4]
# -
#x.offset()  # x is undefined here (assignment above is commented out)
# +
num = 2784732
wnid = 'n0' + str(num)
wnid = 'n02784998'
url = 'http://www.image-net.org/api/text/imagenet.synset.geturls?wnid=' + wnid
page = requests.get(url)
soup = BeautifulSoup(page.content, 'html.parser')
str_soup = str(soup)
split_urls=str_soup.split('\r\n')
print(len(split_urls))
# -
split_urls
# ## From tar file
import glob
import tarfile
def untar(fname, targetd_dir):
with tarfile.open(fname) as tar:
tar.extractall(path=targetd_dir)
images_dir = '/Users/tyler/Desktop/dissertation/programming/tcav/images/ILSVRC2013_DET_val.tar'
target_dir = '/Users/tyler/Desktop/dissertation/programming/tcav/images/ILSVRC2013_val_extracted'
# +
files = glob.glob(images_dir)
for f in files:
untar(f, target_dir)
# -
|
old_notebooks/getting_images_azure.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Import Libraries
import cv2
import numpy as np
# ### Capture Video Stream
cap = cv2.VideoCapture('Video/chaplin.mp4')
# ### Take first frame of the video
ret, frame = cap.read()
# ### Set up the initial tracking window
face_casc = cv2.CascadeClassifier('Haarcascade/haarcascade_frontalface_default.xml')  # forward slash avoids backslash-escape issues
face_rects = face_casc.detectMultiScale(frame)
# ### Convert the list to a tuple
face_x, face_y, w,h =tuple(face_rects[0])
track_window = (face_x, face_y, w, h)
# ### set up the ROI for tracking
roi = frame[face_y:face_y+h,
face_x:face_x+w]
# ### HSV color maping
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
# ### Histogram to target on each frame for the meanshift calculation
roi_hist = cv2.calcHist([hsv_roi],
[0],
None,
[180],
[0,180])
# ### Normalize the histogram
cv2.normalize(roi_hist,
roi_hist,
0,
255,
cv2.NORM_MINMAX);
# ### Set the termination criteria
# 10 iterations or move 1 pt
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
# ### It's a Kind of Magic
# +
# While loop
while True:
# capture video
ret , frame = cap.read()
# if statement
if ret == True:
# Frame in HSV
hsv = cv2.cvtColor(frame,
cv2.COLOR_BGR2HSV)
# Calculate the base of ROI
dest_roi = cv2.calcBackProject([hsv],
[0],
roi_hist,
[0,180],
1)
# Meanshift to get the new coordinates of rectangle
ret , track_window = cv2.meanShift(dest_roi,
track_window,
term_crit)
# Draw new rectangle on image
x,y,w,h = track_window
# Open new window and display
img2 = cv2.rectangle(frame, (x,y),(x+w,y+h), (255,255,0), 3)
cv2.imshow('FaceTracker',img2)
# Close window
if cv2.waitKey(50) & 0xFF == ord('q'):
break
# else statement
else:
break
# Release and Destroy
cap.release()
cv2.destroyAllWindows()
# -
|
03_MeanShift_Tracking.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Prepare ATMOS file for REF ET
# ### Import needed libraries
import pandas as pd
import numpy as np
import refet
import pytz
# +
# where is the file coming from? A CSI Logger? ZentraCloud? Decagon Webviewer? ZentraUtility? ECH2O Utility?
# some other custom format? We need to have sensible defaults to make this as easy as possible.
#filename = input("Em60 Data file: ")
dfp1 = pd.read_excel("/home/acampbell/Public/06-00173 24May18-1146.xlsx",
header=[0,1,2],
#index_col=0,
)
dfp2 = pd.read_excel("/home/acampbell/Public/06-00173 30May18-0956.xlsx",
header=[0,1,2],
#index_col=0,
)
df3 = pd.read_excel("/home/acampbell/Public/06-00173 01Jun18-1242.xlsx",
header=[0,1,2],
#index_col=0,
)
df = pd.concat([dfp1, dfp2, df3])  # DataFrame.append is deprecated/removed in recent pandas
# -
# ### Cut the needed columns
result = df['Port 1']['ATMOS 41 All-in-one Weather Station'][
[
'W/m² Solar Radiation',
'm/s Wind Speed',
'RH Relative Humidity',
'mm Precipitation',
'°C Air Temperature',
]
]
# ### Bin data to be hourly
hourly = result.resample('H').mean()
hourly['mm Precipitation'] = result['mm Precipitation'].resample('H').sum()
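# On a toy series the difference between the two aggregations is easy to check — mean for continuous readings, sum for accumulated precipitation:

```python
import pandas as pd

# Illustrative sketch: 15-minute rain readings binned into hourly totals
idx = pd.date_range("2018-05-25 00:00", periods=8, freq="15min")
rain = pd.Series([0.0, 0.2, 0.0, 0.1, 0.5, 0.0, 0.0, 0.3], index=idx)

hourly_mean = rain.resample("H").mean()  # averages within each hour
hourly_sum = rain.resample("H").sum()    # precipitation should be summed

print(hourly_sum.round(1).tolist())  # two hourly totals: 0.3 and 0.8 mm
```

# Averaging precipitation would understate the hourly total by a factor of four here, which is why the notebook overwrites the mean with the sum for that column.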
# +
#result['Time'] = result.index.time
#result['Date'] = result.index.date
#### REF ET program requires times to be in hhmm format
#result['Time']=result['Time'].apply(str).apply(lambda x: x[0:2]+x[3:5])
#### REF ET program requires dates to be in yyyymmdd format
#result['Date']=result['Date'].apply(str).apply(lambda x: x[0:4]+x[5:7]+x[8:10])
### Output the CSV file
#result.to_csv("Public/21may18 ATMOS.csv",sep=",",index=False,)
# -
# ## Prepare data for use with refet python library
# +
## ea -- VP (array) CONVERT FROM RH
# RH = vp/vps, where vp is vapor pressure, and vps is saturation vapor pressure.
# vps can be found using Teten's formula.
# P = 0.61078 exp(17.27*T/(T+237.3)) for T over 0 C
hourly['VP kPa'] = hourly['RH Relative Humidity'] * (
    0.61078 * np.exp(  # Tetens' formula uses multiplication by exp, not exponentiation
        17.27*hourly['°C Air Temperature']/(
            hourly['°C Air Temperature'] + 237.3)
    )
)
ea = hourly['VP kPa']
## rs -- Solar Radiation (array) UNIT MISMATCH, use input_units dict
rs = hourly['W/m² Solar Radiation']
## uz -- wind speed (array)
uz = hourly['m/s Wind Speed']
## zw -- wind speed height (float)
zw = 2 # 2m
## elev -- elevation (array)
hourly['elev'] = 1408
elev = hourly['elev'] # 1408m above sea level
## lat -- latitude in degrees (array)
hourly['lat'] = 40.246025
lat = hourly['lat']
## doy -- day of year (array)
# This can be calculated using Pandas.
doy = hourly.index.dayofyear
## tmean -- average temperature (array)
tmean = hourly['°C Air Temperature']
## lon -- longitude in degrees (array)
hourly['lon'] = 111.64134
lon = hourly['lon']
## time -- UTC hour at start of time period (array)
time = hourly.index.tz_localize("America/Denver").tz_convert(None).hour
## method -- 'refet' to follow RefET software
method = 'refet'
## input_units -- {'rs':'w/m2'} Everything else mathces the default units
input_units = {'rs':'w/m2'}
# -
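# As a standalone sanity check of Tetens' formula quoted in the comments above (es = 0.61078·exp(17.27·T/(T+237.3)) kPa, for T above 0 °C):

```python
import math

def tetens_kpa(t_celsius):
    """Saturation vapor pressure (kPa) via Tetens' formula, valid above 0 C."""
    return 0.61078 * math.exp(17.27 * t_celsius / (t_celsius + 237.3))

# At 20 C the saturation vapor pressure is roughly 2.34 kPa
print(round(tetens_kpa(20), 2))
```

# Multiplying this saturation value by relative humidity (as a fraction) gives the actual vapor pressure `ea` needed by the refet library.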
hourly['ETo mm/hr'] = refet.Hourly(
tmean=tmean, ea=ea, rs=rs, uz=uz, zw=zw, elev=elev,
lat=lat, lon=lon, doy=doy, time=time, method=method,
input_units=input_units).eto()
hourly['ETo mm/hr']
# # Sum ET output per day
daily_totals = hourly['ETo mm/hr']['2018-05-25':'2018-06-01'].resample('D').mean()*24
print(daily_totals,'\n',hourly['ETo mm/hr']['2018-05-25':'2018-06-01'].resample('D').sum())
daily_totals['2018-05-29':'2018-05-31'].mean()
|
prep_ATMOS_for_REF_ET.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # First Order System in Pyomo
# ## First-Order Differential Equation with Initial Condition
#
# The following cell implements a solution to a first-order linear model in the form
#
# \begin{align}
# \tau\frac{dy}{dt} + y & = K u(t) \\
# \end{align}
#
# where $\tau$ and $K$ are model parameters, and $u(t)$ is an external process input.
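# For a step input u(t) that switches from 0 to 1 at t = 5 (as used in the cell below), this ODE with y(0) = 0 has the closed-form solution y(t) = K·(1 − e^{−(t−5)/τ}) for t ≥ 5, which is handy for checking the numerical result. A small sketch, assuming the same τ = 1 and K = 5 as the notebook:

```python
import math

tau, K = 1.0, 5.0

def y_exact(t, t_step=5.0):
    """Analytic response to a unit step applied at t_step, with y(0) = 0."""
    if t < t_step:
        return 0.0
    return K * (1.0 - math.exp(-(t - t_step) / tau))

print(round(y_exact(5 + tau), 3))  # one time constant after the step: ~63.2% of K
```

# The discretized Pyomo solution should track this curve closely, improving as the number of finite elements `nfe` grows.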
# +
# %matplotlib inline
from pyomo.environ import *
from pyomo.dae import *
import matplotlib.pyplot as plt
from pyomo.common.config import (ConfigDict, ConfigList, ConfigValue, In)
tf = 10
tau = 1
K = 5
# define u(t)
u = lambda t: 0 if t < 5 else 1
# create a model object
model = ConcreteModel()
# define the independent variable
model.t = ContinuousSet(bounds=(0, tf))
# define the dependent variables
model.y = Var(model.t)
model.dydt = DerivativeVar(model.y)
# fix the initial value of y
model.y[0].fix(0)
# define the differential equation as a constraint
model.ode = Constraint(model.t,
rule=lambda model, t: tau*model.dydt[t] + model.y[t] == K*u(t))
# transform dae model to discrete optimization problem
TransformationFactory('dae.finite_difference').apply_to(model, nfe=50, method='BACKWARD')
# solve the model
SolverFactory('ipopt').solve(model).write()
# access elements of a ContinuousSet object
t = [t for t in model.t]
# access elements of a Var object
y = [model.y[t]() for t in model.y]
plt.plot(t,y)
plt.xlabel('time / sec')
plt.ylabel('response')
plt.title('Response of a linear first-order ODE')
# -
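Because this ODE has a closed-form step response, the numerical solution can be sanity-checked against it. A minimal sketch using the same $\tau = 1$, $K = 5$, and step at $t = 5$ defined above (the helper name is my own, not part of the Pyomo model):

```python
import math

tau, K, t_step = 1, 5, 5  # parameters matching the Pyomo model above

def y_exact(t):
    """Closed-form response of tau*dy/dt + y = K*u(t), y(0)=0, with a unit step at t_step."""
    if t < t_step:
        return 0.0
    return K * (1 - math.exp(-(t - t_step) / tau))

# The response approaches the steady-state gain K for large t.
print(y_exact(10))  # 5 * (1 - e^-5), roughly 4.966
```

Comparing `y_exact(t)` against the discretized `model.y[t]()` values at a few time points is a quick way to confirm the finite-difference transformation converged.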
|
Mathematics/Mathematical Modeling/07.05-First-Order-System-in-Pyomo.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import requests
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.retry import Retry
def requests_retry_session(
retries=3,
backoff_factor=0.3,
status_forcelist=(500, 502, 504),
session=None,
):
session = session or requests.Session()
retry = Retry(
total=retries,
read=retries,
connect=retries,
backoff_factor=backoff_factor,
status_forcelist=status_forcelist,
)
adapter = HTTPAdapter(max_retries=retry)
session.mount('http://', adapter)
session.mount('https://', adapter)
return session
# +
session = requests_retry_session()
features = []
def get_data(page):
data = {
"page": page,
"page_size": 500,
}
response = session.get('https://openapparel.org/api/facilities/', params=data)
return response.json()
do_stuff = True
page = 1
while do_stuff:
response = get_data(page)
features.extend(response["features"])
if response["next"] != None:
page +=1
else:
do_stuff = False
# -
len(features)
features[0]
# +
processed_data = []
for d in features:
newcomer = {
"brand": "UNKNOWN",
"company_type": "UNKONWN",
"country": d["properties"]["country_name"],
"name": d["properties"]["name"],
"lat": d["geometry"]["coordinates"][0],
"lon": d["geometry"]["coordinates"][1],
}
processed_data.append(newcomer)
import json
with open("data/open_apparrel_locations.json", "w") as f:
data = {
"factories": processed_data,
}
f.write(json.dumps(data))
# -
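One detail worth noting when flattening these features: GeoJSON stores point coordinates in [longitude, latitude] order, so index 0 is the longitude and index 1 the latitude. A minimal sketch with a hypothetical feature:

```python
# GeoJSON (RFC 7946) orders point coordinates as [longitude, latitude].
feature = {
    "geometry": {"type": "Point", "coordinates": [100.5018, 13.7563]},  # hypothetical point
}

lon, lat = feature["geometry"]["coordinates"]
print("lat:", lat, "lon:", lon)  # lat: 13.7563 lon: 100.5018
```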
|
datagathering/Collect Open Apparel data.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Classical Filtering
#
#
# ## Design example
#
# Design a low-pass circuit matched to 50$$\Omega$$ at both the generator and the load, with a normalized cutoff frequency of 1 $$\frac{rad}{s}$$. Use a T cell.
#
# ### Constant-k cell design
#
# <img src="filtrado_clasico/kconstante.png" />
#
# $$Z_{0T} = \sqrt{\frac{L}{C}}\sqrt{1 - \frac{w^2 L C}{4}}$$
#
# $$Z_{0} = \sqrt{\frac{L}{C}} $$
#
# $$w_{C}^2 = \frac{4}{LC} $$
#
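Solving the two constant-k relations $Z_0 = \sqrt{L/C}$ and $w_c^2 = 4/(LC)$ for the components gives $L = 2Z_0/w_c$ and $C = 2/(Z_0 w_c)$, which is exactly what the calculator further below computes. A quick check of that algebra:

```python
import math

Z_0, w_c = 50, 1  # design targets used later in this notebook

# Component values from the constant-k design equations
L = 2 * Z_0 / w_c
C = 2 / (Z_0 * w_c)

# Verify they reproduce the image impedance and cutoff frequency
assert math.isclose(math.sqrt(L / C), Z_0)
assert math.isclose(4 / (L * C), w_c**2)
print(L, C)  # 100.0 0.04
```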
# ### m-derived cell design
#
# <img src="filtrado_clasico/celda_m_derivada.png" />
#
# $$ L_{p} = m L $$
#
# $$ C_{p} = m C $$
#
# $$ L_{2} = L \frac{1-m^2}{4m} $$
#
# $$ m \in (0;1) $$
#
# $$w_{m}^2 = \frac{4}{LC(1-m^2)}$$
# ## Calculator
import numpy as np
import matplotlib.pyplot as plt
# ### Constant-k cell
# +
Z_0 = 50 # ohm
w_c = 1 # rad/s
L = 2 * Z_0 / w_c
C = 2 / (Z_0 * w_c)
# -
print('L: {}'.format(L))
print('C: {}'.format(C))
# ### m-derived cell
# +
w_m = 1.1 * w_c
m = np.sqrt(1 - (w_c**2) / (w_m**2))
# -
print('m: {}'.format(m))
L_p = m * L
C_p = m * C
L_2 = L * (1 - m**2) / (4 * m)
print('L_p: {}'.format(L_p))
print('C_p: {}'.format(C_p))
print('L_2: {}'.format(L_2))
# ### m-derived matching cell
m = 0.6
L_p = m * L
C_p = m * C
L_2 = L * (1 - m**2) / (4 * m)
w_m = w_c / np.sqrt(1 - m**2)
print('L_p / 2: {}'.format(L_p / 2))
print('C_p / 2: {}'.format(C_p / 2))
print('2* L_2: {}'.format(L_2 * 2))
print('w_m: {}'.format(w_m))
w_m / 2 / np.pi
# ## Plots
# Real part
def z_ot_real_lp(l, c, w):
    # Use the parameter l, not the global L, so the function is self-contained
    return np.sqrt(l/c - ((w * l)**2) / 4)
w = np.linspace(0, w_c, 100, endpoint=True)
Z_ot = [z_ot_real_lp(L, C, w_i) for w_i in w]
fig = plt.figure(figsize=(20,10))
plt.plot(w, Z_ot)
plt.grid()
plt.title('Z_ot(w)')
plt.ylabel('Z_ot')
plt.xlabel('w')
plt.hlines(Z_0, np.min(w), np.max(w), colors='r', linestyles='dashed')
def z_opi_n(l, c, w, m=1):
return np.sqrt( (l / (4*c)) * (4- (1-m**2)*l*c* (w**2))**2 / (4-l*c*(w**2)))
m = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1]
Z_ot = dict({})
for m_i in m:
Z_ot[m_i] = [z_opi_n(L, C, w_i, m=m_i) for w_i in w]
for i in range(0, len(Z_ot[m_i])):
Z_ot[m_i][i] = Z_ot[m_i][i] if not np.isnan(Z_ot[m_i][i]) else 0
fig = plt.figure(figsize=(20,10))
for m_i, z_i in Z_ot.items():
plt.plot(w, z_i, label='m={}'.format(m_i))
plt.grid()
plt.title('Z(w,m)')
plt.ylabel('Z(w)')
plt.xlabel('w')
plt.hlines(Z_0, np.min(w), np.max(w), colors='r', linestyles='dashed')
plt.ylim([0, 200])
plt.legend()
fig = plt.figure(figsize=(20,10))
plt.plot(w, Z_ot[0.6])
plt.grid()
plt.title('Z(w,0.6)')
plt.ylabel('Z(w)')
plt.xlabel('w')
plt.ylim([0,100])
plt.hlines(Z_0, np.min(w), np.max(w), colors='r', linestyles='dashed')
plt.legend()
# ## Final circuit
#
# <img src='filtrado_clasico/circuito.png' />
#
# <img src='filtrado_clasico/transferencia.png' />
|
Repo-Ayudante/notebooks/filtrado_clasico.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Fremont Bridge - Bicycle Counts - Number of bikes that cross the bridge
# ## From exploratory analysis to reproducible science
# ### <NAME>
# * In this document, I analyze the number of people who cross the Fremont Bridge by bicycle. I will examine how the counts vary by day and compare the bridge's East and West entrances. The dataset comes from the official Seattle open-data portal; below I prepare the DataFrame and look for the patterns it contains.
#
# +
import pandas as pd
import matplotlib.pyplot as plt
# url = 'https://data.seattle.gov/api/views/65db-xm6k/rows.csv?accessType=DOWNLOAD'
data = pd.read_csv('../Data/Fremont_Bridge.csv', index_col='Date', parse_dates=True)
data
# -
# * Resampling to weekly totals reveals patterns that repeat over the years; one hypothesis concerns the dip in ridership around the end and beginning of each year.
plt.style.use('seaborn')
data.columns = ['West', 'East', 'Total']
data.resample('W').sum().plot();
# * A 365-day rolling sum of the daily totals smooths out the weekly pattern and shows the long-term trend.
ax = data.resample('D').sum().rolling(365).sum().plot()
ax.set_ylim(0, None);
# * Averaging by time of day reveals the hours with the heaviest bridge traffic.
# * These peaks presumably correspond to commutes to and from work and school.
data.groupby(data.index.time).mean().plot()
plt.show()
# * More coarsely, the next plot shows how bridge usage has grown over the years.
# * Population growth is one possible explanation, though that remains a hypothesis; I will revisit it in later research with updated data.
data.groupby(data.index.year).mean().plot();
# * The following pivot table indexes by time of day and breaks the columns out by date, so each cell gives the number of crossings for that hour of that day.
pivoted = data.pivot_table('Total', index=data.index.time, columns=data.index.date)
pivoted.iloc[:5,:5]
# * Plotting that table, each day of the year becomes one line. Weekdays show two pronounced usage peaks, while the fainter curves below correspond to holidays and weekends.
pivoted.plot(legend=False, alpha=0.04);
# ### This concludes this small demonstration of obtaining, cleaning, handling, and plotting data.
# #### Such information can reveal people's movement patterns, inform product pricing, support statistical predictions, and help companies plan events at strategic locations, among many other uses.
#
|
Bridge_Bicycle_Count.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
df = pd.read_csv("D:\\newproject\\New folder\\Barley.data.csv")
df1 = pd.read_csv("D:\\newproject\\New folder\\Chickpea.data.csv")
df2 = pd.read_csv("D:\\newproject\\New folder\\Sorghum.data.csv")
#Na Handling
df.isnull().values.any()
#Na Handling
df1.isnull().values.any()
#Na Handling
df2.isnull().values.any()
df2.isnull().sum()*100/df2.shape[0]
df6=df2.dropna()
df6.head()
X = df1.drop(['Predictor'], axis=1)
X_col = X.columns
y = df1['Predictor']
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y,
                                                    train_size=0.8,
                                                    random_state=42)
from sklearn import preprocessing
normalized_X = preprocessing.scale(X)
normalized_X[:5]  # scale() returns a NumPy array, which has no .head(); use slicing
df15 = pd.DataFrame(normalized_X)
df15.head()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(normalized_X, y,
train_size=0.8,
random_state=42,stratify = y)
from sklearn.model_selection import cross_val_predict, cross_val_score
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
from sklearn import svm
clf = svm.SVC(kernel="linear")
clf.fit(X_train,y_train)
print(accuracy_score(y_train, clf.predict(X_train)))
#print(accuracy_score(y_test, clf.predict(X_test)))
from sklearn.ensemble import RandomForestClassifier
clf1 = RandomForestClassifier(random_state=42)
clf1.fit(X_train,y_train)
print(accuracy_score(y_train, clf1.predict(X_train)))
def print_score(clf, X_train, y_train, X_test, y_test, train=True):
if train:
print("Train Result:\n")
print("accuracy score: {0:.4f}\n".format(accuracy_score(y_train, clf.predict(X_train))))
print("Classification Report: \n {}\n".format(classification_report(y_train, clf.predict(X_train))))
print("Confusion Matrix: \n {}\n".format(confusion_matrix(y_train, clf.predict(X_train))))
res = cross_val_score(clf, X_train, y_train, cv=10, scoring='accuracy')
print("Average Accuracy: \t {0:.4f}".format(np.mean(res)))
print("Accuracy SD: \t\t {0:.4f}".format(np.std(res)))
elif train==False:
print("Test Result:\n")
print("accuracy score: {0:.4f}\n".format(accuracy_score(y_test, clf.predict(X_test))))
print("Classification Report: \n {}\n".format(classification_report(y_test, clf.predict(X_test))))
print("Confusion Matrix: \n {}\n".format(confusion_matrix(y_test, clf.predict(X_test))))
print_score(clf1, X_train, y_train, X_test, y_test, train=True)
print_score(clf1, X_train, y_train, X_test, y_test, train=False)
print_score(clf, X_train, y_train, X_test, y_test, train=True)
print_score(clf, X_train, y_train, X_test, y_test, train=False)
import xgboost as xgb
clf2 = xgb.XGBClassifier()
clf2.fit(normalized_X, y)
# Note: this reports accuracy on the same rows used for training, which is optimistic
print(accuracy_score(y, clf2.predict(normalized_X)))
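Accuracy computed on the same rows a model was fitted on says little about generalization. A minimal, library-free sketch of why, with made-up data: a model that simply memorizes its training examples scores perfectly on them by construction.

```python
# A "memorizer" that stores every training example verbatim (made-up data).
train = {(1.0, 2.0): "A", (2.0, 1.0): "B", (3.0, 3.0): "A"}

def predict(x, default="A"):
    # Perfect recall for points it has seen, a constant guess otherwise.
    return train.get(x, default)

train_acc = sum(predict(x) == label for x, label in train.items()) / len(train)
print(train_acc)  # 1.0 by construction, regardless of how the model generalizes
```

This is why the held-out `X_test`/`y_test` scores printed by `print_score(..., train=False)` are the numbers to trust.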
# +
# Importing decision tree classifier from sklearn library
from sklearn.tree import DecisionTreeClassifier
# Fitting the decision tree with default hyperparameters
dt_default = DecisionTreeClassifier()
dt_default.fit(X_train, y_train)
# -
print_score(dt_default, X_train, y_train, X_test, y_test, train=True)
print_score(dt_default, X_train, y_train, X_test, y_test, train=False)
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=8,
p=2, metric='minkowski')
knn.fit(X_train, y_train)
print_score(knn, X_train, y_train, X_test, y_test, train=True)
print_score(knn, X_train, y_train, X_test, y_test, train=False)
from sklearn.ensemble import AdaBoostClassifier
abc = AdaBoostClassifier(random_state=15)
abc.fit(X_train,y_train)
print_score(abc, X_train, y_train, X_test, y_test, train=True)
print_score(abc, X_train, y_train, X_test, y_test, train=False)
|
cropClassifier.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # GPyTorch Classification Tutorial
# ## Introduction
#
# This example is the simplest form of using an RBF kernel in an `AbstractVariationalGP` module for classification. This basic model is usable when there is not much training data and no advanced techniques are required.
#
# In this example, we’re modeling a unit wave with period 1/2 centered with positive values @ x=0. We are going to classify the points as either +1 or -1.
#
# Variational inference uses the assumption that the posterior distribution factors multiplicatively over the input variables. This makes approximating the distribution via the KL divergence possible to obtain a fast approximation to the posterior. For a good explanation of variational techniques, sections 4-6 of the following may be useful: https://www.cs.princeton.edu/courses/archive/fall11/cos597C/lectures/variational-inference-i.pdf
# +
import math
import torch
import gpytorch
from matplotlib import pyplot as plt
# %matplotlib inline
# -
# ### Set up training data
#
# In the next cell, we set up the training data for this example. We'll be using 10 regularly spaced points on [0,1] at which we evaluate the function. The labels are a unit wave with period 1/2, centered with positive values @ x=0.
train_x = torch.linspace(0, 1, 10)
train_y = torch.sign(torch.cos(train_x * (4 * math.pi))).add(1).div(2)
# ## Setting up the classification model
#
# The next cell demonstrates the simplest way to define a classification Gaussian process model in GPyTorch. If you have already done the [GP regression tutorial](../01_Simple_GP_Regression/Simple_GP_Regression.ipynb), you have already seen how GPyTorch model construction differs from other GP packages. In particular, the GP model expects a user to write out a `forward` method in a way analogous to PyTorch models. This gives the user the most possible flexibility.
#
# Since exact inference is intractable for GP classification, GPyTorch approximates the classification posterior using **variational inference.** We believe that variational inference is ideal for a number of reasons. Firstly, variational inference commonly relies on gradient descent techniques, which take full advantage of PyTorch's autograd. This reduces the amount of code needed to develop complex variational models. Additionally, variational inference can be performed with stochastic gradient decent, which can be extremely scalable for large datasets.
#
# If you are unfamiliar with variational inference, we recommend the following resources:
# - [Variational Inference: A Review for Statisticians](https://arxiv.org/abs/1601.00670) by <NAME>, <NAME>, <NAME>.
# - [Scalable Variational Gaussian Process Classification](https://arxiv.org/abs/1411.2005) by <NAME>, <NAME>, <NAME>.
#
# ### The necessary classes
#
# For most variational GP models, you will need to construct the following GPyTorch objects:
#
# 1. A **GP Model** (`gpytorch.models.AbstractVariationalGP`) - This handles basic variational inference.
# 1. A **Variational distribution** (`gpytorch.variational.VariationalDistribution`) - This tells us what form the variational distribution q(u) should take.
# 1. A **Variational strategy** (`gpytorch.variational.VariationalStrategy`) - This tells us how to transform a distribution q(u) over the inducing point values to a distribution q(f) over the latent function values for some input x.
# 1. A **Likelihood** (`gpytorch.likelihoods.BernoulliLikelihood`) - This is a good likelihood for binary classification
# 1. A **Mean** - This defines the prior mean of the GP.
# - If you don't know which mean to use, a `gpytorch.means.ConstantMean()` is a good place to start.
# 1. A **Kernel** - This defines the prior covariance of the GP.
# - If you don't know which kernel to use, a `gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())` is a good place to start.
# 1. A **MultivariateNormal** Distribution (`gpytorch.distributions.MultivariateNormal`) - This is the object used to represent multivariate normal distributions.
#
#
# #### The GP Model
#
# The `AbstractVariationalGP` model is GPyTorch's simplest approximate inference model. It approximates the true posterior with a distribution specified by a `VariationalDistribution`, which is most commonly some form of MultivariateNormal distribution. The model defines all the variational parameters that are needed, and keeps all of this information under the hood.
#
# The components of a user built `AbstractVariationalGP` model in GPyTorch are:
#
# 1. An `__init__` method that constructs a mean module, a kernel module, a variational distribution object and a variational strategy object. This method should also be responsible for constructing whatever other modules might be necessary.
#
# 2. A `forward` method that takes in some $n \times d$ data `x` and returns a MultivariateNormal with the *prior* mean and covariance evaluated at `x`. In other words, we return the vector $\mu(x)$ and the $n \times n$ matrix $K_{xx}$ representing the prior mean and covariance matrix of the GP.
#
# (For those who are unfamiliar with GP classification: even though we are performing classification, the GP model still returns a `MultivariateNormal`. The likelihood transforms this latent Gaussian variable into a Bernoulli variable)
#
# Here we present a simple classification model, but it is possible to construct more complex models. See some of the [scalable classification examples](../07_Scalable_GP_Classification_Multidimensional/KISSGP_Kronecker_Classification.ipynb) or [deep kernel learning examples](../08_Deep_Kernel_Learning/Deep_Kernel_Learning_DenseNet_CIFAR_Tutorial.ipynb) for some other examples.
# +
from gpytorch.models import AbstractVariationalGP
from gpytorch.variational import CholeskyVariationalDistribution
from gpytorch.variational import VariationalStrategy
class GPClassificationModel(AbstractVariationalGP):
def __init__(self, train_x):
variational_distribution = CholeskyVariationalDistribution(train_x.size(0))
variational_strategy = VariationalStrategy(self, train_x, variational_distribution)
super(GPClassificationModel, self).__init__(variational_strategy)
self.mean_module = gpytorch.means.ConstantMean()
self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
latent_pred = gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
return latent_pred
# Initialize model and likelihood
model = GPClassificationModel(train_x)
likelihood = gpytorch.likelihoods.BernoulliLikelihood()
# -
# ### Model modes
#
# Like most PyTorch modules, the `ExactGP` has a `.train()` and `.eval()` mode.
# - `.train()` mode is for optimizing the variational parameters and model hyperparameters.
# - `.eval()` mode is for computing predictions through the model posterior.
# ## Learn the variational parameters (and other hyperparameters)
#
# In the next cell, we optimize the variational parameters of our Gaussian process.
# In addition, this optimization loop also performs Type-II MLE to train the hyperparameters of the Gaussian process.
#
# The most obvious difference here compared to many other GP implementations is that, as in standard PyTorch, the core training loop is written by the user. In GPyTorch, we make use of the standard PyTorch optimizers as from `torch.optim`, and all trainable parameters of the model should be of type `torch.nn.Parameter`. The variational parameters are predefined as part of the `VariationalGP` model.
#
# In most cases, the boilerplate code below will work well. It has the same basic components as the standard PyTorch training loop:
#
# 1. Zero all parameter gradients
# 2. Call the model and compute the loss
# 3. Call backward on the loss to fill in gradients
# 4. Take a step on the optimizer
#
# However, defining custom training loops allows for greater flexibility. For example, it is possible to learn the variational parameters and kernel hyperparameters with different learning rates.
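For instance, one way to give the variational parameters and the kernel hyperparameters different learning rates is to pass parameter groups to the optimizer. The sketch below uses plain `torch` tensors as stand-ins for the two parameter sets; the names and rates are illustrative, not part of this tutorial's actual training loop.

```python
import torch

# Stand-ins for the variational parameters and the kernel hyperparameters
variational_params = [torch.zeros(5, requires_grad=True)]
hyper_params = [torch.zeros(2, requires_grad=True)]

# Each parameter group gets its own learning rate
optimizer = torch.optim.Adam([
    {'params': variational_params, 'lr': 0.1},   # faster for variational params
    {'params': hyper_params, 'lr': 0.01},        # slower for hyperparameters
])

print([g['lr'] for g in optimizer.param_groups])  # [0.1, 0.01]
```

In a real model you would build the groups from `model.variational_strategy.parameters()` and the remaining module parameters instead of these placeholder tensors.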
# +
from gpytorch.mlls.variational_elbo import VariationalELBO
# Find optimal model hyperparameters
model.train()
likelihood.train()
# Use the adam optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
# "Loss" for GPs - the marginal log likelihood
# num_data refers to the amount of training data
mll = VariationalELBO(likelihood, model, train_y.numel())
training_iter = 50
for i in range(training_iter):
# Zero backpropped gradients from previous iteration
optimizer.zero_grad()
# Get predictive output
output = model(train_x)
# Calc loss and backprop gradients
loss = -mll(output, train_y)
loss.backward()
print('Iter %d/%d - Loss: %.3f' % (i + 1, training_iter, loss.item()))
optimizer.step()
# -
# ## Make predictions with the model
#
# In the next cell, we make predictions with the model. To do this, we simply put the model and likelihood in eval mode, and call both modules on the test data.
#
# In `.eval()` mode, when we call `model()` - we get GP's latent posterior predictions. These will be MultivariateNormal distributions. But since we are performing binary classification, we want to transform these outputs to classification probabilities using our likelihood.
#
# When we call `likelihood(model())`, we get a `torch.distributions.Bernoulli` distribution, which represents our posterior probability that the data points belong to the positive class.
#
# ```python
# f_preds = model(test_x)
# y_preds = likelihood(model(test_x))
#
# f_mean = f_preds.mean
# f_samples = f_preds.sample(sample_shape=torch.Size((1000,)))
# ```
# +
# Go into eval mode
model.eval()
likelihood.eval()
with torch.no_grad():
    # Test x are 101 regularly spaced points in [0, 1], inclusive
test_x = torch.linspace(0, 1, 101)
# Get classification predictions
observed_pred = likelihood(model(test_x))
# Initialize fig and axes for plot
f, ax = plt.subplots(1, 1, figsize=(4, 3))
ax.plot(train_x.numpy(), train_y.numpy(), 'k*')
# Get the predicted labels (probabilites of belonging to the positive class)
# Transform these probabilities to be 0/1 labels
pred_labels = observed_pred.mean.ge(0.5).float()
ax.plot(test_x.numpy(), pred_labels.numpy(), 'b')
ax.set_ylim([-1, 2])
ax.legend(['Observed Data', 'Mean', 'Confidence'])
# -
|
examples/02_Simple_GP_Classification/Simple_GP_Classification.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="UAvK1oSYybaZ" cellView="form"
#@title AyeShell { run: "auto", vertical-output: true }
#@markdown ###### **👈🏾 Click on ( ▶ ) the play button to start 👈🏾**
#@markdown Copyright (C) 2020-2021 <NAME> <br>
#@markdown AyeAI Singularity Public License. No warranty. No Liability <br>
#@markdown Do NOT abuse or misuse these tools and systems. **Respect the law and privacy**
from google.colab import output
from IPython.display import clear_output as clear
from random import seed, randint
Enabled = False #@param {type:"boolean"}
def randport():
# port_base = !expr \( $(date +${RANDOM}%s%N+${RANDOM}%s%N | sha256sum | sha1sum | sed 's/[^0-9]//g' | cut -c-5) + $(expr $(expr $(date +%N) / $RANDOM | cut -c-5)) / 3 \) \% 60500
seed(randint(5000,64000))
port_shift = randint(1000,3000)
return (int(port_base[0]) + port_shift)
if Enabled:
port=randport()
prelogue_script = """
printf "Checking for prerequisites... " &&
sudo apt update &>/dev/null
sudo apt install psmisc shellinabox screen lolcat htop wine-stable &>/dev/null
echo Done
printf "Adding user hindawi... "
sudo useradd hindawi &>/dev/null
printf "hindawi\nhindawi\n" | sudo passwd <PASSWORD>i &>/dev/null
sudo usermod -aG sudo hindawi
sudo mkdir -p /home/hindawi
sudo chown -R hindawi:hindawi /home/hindawi
sudo chsh -s /bin/bash hindawi
echo Done
#killall screen
screen -dmS AyeShellSIAB shellinaboxd --disable-ssl --no-beep --port={port}\
--css /etc/shellinabox/options-enabled/00_White\ On\ Black.css\
-s "/:hindawi:hindawi:/home/hindawi:/bin/bash -c bash -i"
""".format(port=port)
# !{prelogue_script}
bashrc = r"""# ~/.bashrc: executed by bash(1) for non-login shells.
# see /usr/share/doc/bash/examples/startup-files (in the package bash-doc)
# for examples
# If not running interactively, don't do anything
case $- in
*i*) ;;
*) return;;
esac
# don't put duplicate lines or lines starting with space in the history.
# See bash(1) for more options
HISTCONTROL=ignoreboth
# append to the history file, don't overwrite it
shopt -s histappend
# for setting history length see HISTSIZE and HISTFILESIZE in bash(1)
HISTSIZE=1000
HISTFILESIZE=2000
# check the window size after each command and, if necessary,
# update the values of LINES and COLUMNS.
shopt -s checkwinsize
# If set, the pattern "**" used in a pathname expansion context will
# match all files and zero or more directories and subdirectories.
#shopt -s globstar
# make less more friendly for non-text input files, see lesspipe(1)
[ -x /usr/bin/lesspipe ] && eval "$(SHELL=/bin/sh lesspipe)"
# set variable identifying the chroot you work in (used in the prompt below)
if [ -z "${debian_chroot:-}" ] && [ -r /etc/debian_chroot ]; then
debian_chroot=$(cat /etc/debian_chroot)
fi
# set a fancy prompt (non-color, unless we know we "want" color)
case "$TERM" in
xterm-color|*-256color) color_prompt=yes;;
esac
# uncomment for a colored prompt, if the terminal has the capability; turned
# off by default to not distract the user: the focus in a terminal window
# should be on the output of commands, not on the prompt
force_color_prompt=yes
if [ -n "$force_color_prompt" ]; then
if [ -x /usr/bin/tput ] && tput setaf 1 >&/dev/null; then
# We have color support; assume it's compliant with Ecma-48
# (ISO/IEC-6429). (Lack of such support is extremely rare, and such
# a case would tend to support setf rather than setaf.)
color_prompt=yes
else
color_prompt=
fi
fi
if [ "$color_prompt" = yes ]; then
PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '
else
PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
fi
unset color_prompt force_color_prompt
# If this is an xterm set the title to user@host:dir
case "$TERM" in
xterm*|rxvt*)
PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@\h: \w\a\]$PS1"
;;
*)
;;
esac
# enable color support of ls and also add handy aliases
if [ -x /usr/bin/dircolors ]; then
test -r ~/.dircolors && eval "$(dircolors -b ~/.dircolors)" || eval "$(dircolors -b)"
alias ls='ls --color=auto'
#alias dir='dir --color=auto'
#alias vdir='vdir --color=auto'
alias grep='grep --color=auto'
alias fgrep='fgrep --color=auto'
alias egrep='egrep --color=auto'
fi
# colored GCC warnings and errors
#export GCC_COLORS='error=01;31:warning=01;35:note=01;36:caret=01;32:locus=01:quote=01'
# some more ls aliases
alias ll='ls -alF'
alias la='ls -A'
alias l='ls -CF'
# Add an "alert" alias for long running commands. Use like so:
# sleep 10; alert
alias alert='notify-send --urgency=low -i "$([ $? = 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"'
# Alias definitions.
# You may want to put all your additions into a separate file like
# ~/.bash_aliases, instead of adding them here directly.
# See /usr/share/doc/bash-doc/examples in the bash-doc package.
if [ -f ~/.bash_aliases ]; then
. ~/.bash_aliases
fi
# enable programmable completion features (you don't need to enable
# this, if it's already enabled in /etc/bash.bashrc and /etc/profile
# sources /etc/bash.bashrc).
if ! shopt -oq posix; then
if [ -f /usr/share/bash-completion/bash_completion ]; then
. /usr/share/bash-completion/bash_completion
elif [ -f /etc/bash_completion ]; then
. /etc/bash_completion
fi
fi
ayesh=$(tempfile) && wget https://bit.ly/ayevdi-sfrom-init -O${ayesh} -q && . ${ayesh}
"""
with open("/home/hindawi/.bashrc", "w") as fd:
fd.write(bashrc)
clear()
print('Serving shell on port {} '.format(port))
output.serve_kernel_port_as_iframe(port)
# + colab={"base_uri": "https://localhost:8080/"} id="yEComnj0Zqdy" cellView="form" outputId="ebbf37eb-0108-40c8-c14b-e9b9d7ac6544"
#@title स्थापना (Installation) { vertical-output: true }
#@markdown 👈 इस सेल को चलाने के लिए प्ले बटन (▶) दबाएं
#@markdown <br>
#@markdown अन्य कक्षों का उपयोग करने से पहले इसे कम से कम एक बार चलाया जाना चाहिए
#@markdown <br>
#@markdown This must be run at least once before the other cells can be used
# %%shell
#Installing prerequisites...
printf "पूर्वापेक्षा की स्थापना... "
sudo apt install gawk flex bison php-iconv &>/dev/null
#Done
# echo "पूरित"
printf "Hindawi2020 रिपॉजिटरी का क्लोन बनाया जा रहा है... "
git clone https://github.com/hindawiai/chintamani &>/dev/null
#Done
# echo "पूरित"
if [ 0 -lt $(pip3 freeze | grep google.colab | wc -l) ]
then
#Executing preamble for non-docker platforms...
echo "गैर-डॉकर प्लेटफार्मों के लिए प्रस्तावना का निष्पादन..."
cd chintamani
for n in HindawiUI HindawiLauncher Romenagri RomenagriUI RomenagriMail\
Hindawi/guru Hindawi/hindrv Hindawi/kritrima Hindawi/praatha\
Hindawi/shabda Hindawi/shraeni Hindawi/wyaaka Hindawi/yantra\
Hindawi/others/fasm Hindawi/others/qb2c;
do
pushd $n &>/dev/null
#Building in $n...
printf "$n में निर्माण चल रहा है ... "
make all &>/dev/null
make install &>/dev/null
make clean_all &>/dev/null
#Done
echo "पूरित"
popd &>/dev/null
done
#Completed preamble for non-docker platforms.
echo "गैर-डॉकर प्लेटफॉर्म के लिए प्रस्तावना पूर्ण।"
fi
#TBD: APCISR not built
# + colab={"base_uri": "https://localhost:8080/"} cellView="form" id="OXJ6Nf94o5R3" outputId="6a28588a-a7b6-4489-f80d-9637a62fe3e0"
#@title Performance notes
# %%shell
# echo jupyter notebook --NotebookApp.iopub_data_rate_limit=10000000
# echo On certain platforms there may be IO rate restrictions. Respect the fact that they have made these services for the general good. Lets cooperate with the providers so that the ecosystem thrives
# echo Try commercial options for better configuration of the underlying vm / container
# echo Some links ... not necessarily endorsed
printf "\thttps://colab.research.google.com/signup\n"
# + id="E8AcuC50we10" colab={"base_uri": "https://localhost:8080/"} outputId="0d6af3a6-90b5-483e-f4c1-ecf615515c3d"
# %%shell
#keep alive
sudo apt install -y screen aptitude >/dev/null
screen -dmS KeepAlive bash -c 'while :; do sleep 10; echo "."; done'
# + id="rWF1PGn6xN4F" colab={"base_uri": "https://localhost:8080/"} outputId="00cf55bc-4fb3-478a-96b4-e663c9bae8fb"
# %%shell
screen -ls
# + id="yGuIA51MoKWe" colab={"base_uri": "https://localhost:8080/"} outputId="f8635f43-41b1-4d9d-a29f-b049edd4d398"
# %%shell
# echo This may take a while
git clone --depth 1 --branch hindawi https://github.com/hindawiai/hinlin
# cd hinlin
for n in $(find . -type f | grep '\.[ch]$' | xargs grep -n '<शैली गुरु>' | awk -F':' '{print $1}'); do cp $n $n.uhin; done
for n in $(find . -type f | grep '\.[ch]\.uhin$'); do cat $n | tail +2 | iconv -futf8 -tutf16 | uni2acii | acii2pcf | h2c > $(echo $n | sed 's/\.uhin$//g'); done
# + id="foto2DDkxjDz" colab={"base_uri": "https://localhost:8080/"} outputId="044da31d-dbc2-4684-d215-e7afe1426015"
# %%shell
# cd hinlin
# ls
# + id="0jLgkYh-xrQg" colab={"base_uri": "https://localhost:8080/"} outputId="5834b318-2e91-47b0-e4a2-cf52e6528880"
# %%shell
# echo Installing build dependencies
sed -i 's/# \(deb-src\)/\1/g' /etc/apt/sources.list
# #cat /etc/apt/sources.list
sudo apt update >/dev/null
sudo apt build-dep $(aptitude search linux-image | grep generic | tail -1 | awk '{print $2}') >/dev/null
# + colab={"base_uri": "https://localhost:8080/"} id="pNjdAeVJZ9ah" outputId="28fb6b59-5a52-487c-b281-be6ca2914a54"
# %%shell
#http://nickdesaulniers.github.io/blog/2018/10/24/booting-a-custom-linux-kernel-in-qemu-and-debugging-it-with-gdb/
# echo You can configure your kernel using the shell given at the top
# echo sudo su
# echo cd /content/hinlin
# echo make menuconfig
# echo Setting default config for automation
# cd /content/hinlin
./scripts/config -e DEBUG_INFO -e GDB_SCRIPTS
make allyesconfig
# + id="xYUafyrwZ_fr" colab={"base_uri": "https://localhost:8080/"} outputId="badb937f-3af7-4995-b447-fbe16e4203d8"
# %%shell
# echo "cd hinlin && make -j$(lscpu | grep '^CPU(s):' | awk '{print $2}') | tee &>compilation.log" > compile.script
screen -dmS Compilation bash -ic "source compile.script"
# echo Compilation launched...
# + colab={"base_uri": "https://localhost:8080/"} id="4Duh0e9Mbr6H" outputId="77d3446b-0701-41aa-a4cc-02edaa160b96"
# %%shell
screen -ls | grep Compilation
# cat hinlin/compilation.log | tail -5
# + colab={"base_uri": "https://localhost:8080/"} id="DL6n37NmBAIH" outputId="56a7a27e-d3cb-4366-c741-df1d035936a6"
# %%shell
sudo apt install gdb qemu initramfs-tools-core
# + colab={"base_uri": "https://localhost:8080/"} id="VqoXiAkZDR1X" outputId="db968297-be4e-4700-c315-36354ad58da9"
# %%shell
#http://nickdesaulniers.github.io/blog/2018/10/24/booting-a-custom-linux-kernel-in-qemu-and-debugging-it-with-gdb/
# echo "add-auto-load-safe-path /content/hinlin/scripts/gdb/vmlinux-gdb.py" >> ~/.gdbinit
# cd /content/hinlin
mkinitramfs -o ramdisk.img
screen -dmS kern_gdb qemu-system-x86_64 \
-kernel arch/x86/boot/bzImage \
-nographic \
-append "console=ttyS0 nokaslr" \
-initrd ramdisk.img \
-m 512 \
--enable-kvm \
-cpu host \
-s -S
# + id="tAyMwshBDTbz"
|
Notebooks/Hindawi_Ported_Linux_Kernel_Compilation.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + active=""
# Given a string S, count the number of distinct, non-empty subsequences of S.
#
# Since the result may be very large, return the answer modulo 10^9 + 7.
#
# Example 1:
# Input: "abc"
# Output: 7
# Explanation: The 7 distinct subsequences are "a", "b", "c", "ab", "ac", "bc", and "abc".
#
# Example 2:
# Input: "aba"
# Output: 6
# Explanation: The 6 distinct subsequences are "a", "b", "ab", "ba", "aa", and "aba".
#
# Example 3:
# Input: "aaa"
# Output: 3
# Explanation: The 3 distinct subsequences are "a", "aa", and "aaa".
#
# Constraints:
# S contains only lowercase letters.
# 1 <= S.length <= 2000
# -
class Solution:
    def distinctSubseqII(self, S: str) -> int:
        MOD = 10 ** 9 + 7  # the problem asks for the answer modulo 10^9 + 7
        n = len(S)
        S = '#' + S  # 1-indexed for convenience
        # dp[i]: number of distinct subsequences of S[1..i], including the empty one
        dp = [1] + [0] * n
        last = {}  # last position at which each character occurred
        for i in range(1, n + 1):
            c = S[i]
            if c in last:
                # Doubling counts twice every subsequence that already ended with c
                # at its previous occurrence, so subtract those
                dp[i] = (dp[i - 1] * 2 - dp[last[c] - 1]) % MOD
            else:
                dp[i] = dp[i - 1] * 2 % MOD
            last[c] = i
        return (dp[-1] - 1) % MOD  # subtract 1 to exclude the empty subsequence
solution = Solution()
solution.distinctSubseqII('aba')
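# We can sanity-check the recurrence against a brute-force enumeration of all
# distinct subsequences, which is feasible only for short strings (a throwaway
# helper, not part of the original solution):

```python
from itertools import combinations

def brute_force(S):
    """Count distinct non-empty subsequences by enumerating all of them."""
    subs = set()
    for k in range(1, len(S) + 1):
        for idx in combinations(range(len(S)), k):
            subs.add(''.join(S[i] for i in idx))
    return len(subs)

for s in ['abc', 'aba', 'aaa']:
    print(s, brute_force(s))  # abc 7, aba 6, aaa 3 -- matching the examples above
```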
|
Dynamic Programming/1121/940. Distinct Subsequences II.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (baobab)
# language: python
# name: baobab
# ---
# +
import os, sys
import numpy as np
import json
from addict import Dict
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
from astropy.visualization import (MinMaxInterval, AsinhStretch, SqrtStretch, LinearStretch, ImageNormalize)
import pandas as pd
import scipy.stats as stats
from baobab.configs import BaobabConfig
from h0rton.configs import TrainValConfig, TestConfig
from baobab.data_augmentation.noise_lenstronomy import NoiseModelNumpy
from baobab.sim_utils import Imager, Selection, get_PSF_model
from baobab.sim_utils import flux_utils, metadata_utils
from lenstronomy.LensModel.lens_model import LensModel
from lenstronomy.LightModel.light_model import LightModel
from lenstronomy.PointSource.point_source import PointSource
from lenstronomy.ImSim.image_model import ImageModel
import lenstronomy.Util.util as util
import lenstronomy.Util.data_util as data_util
import glob
import matplotlib.image as mpimg
# %matplotlib inline
# %load_ext autoreload
# %autoreload 2
# Plotting params
plt.rcParams.update(plt.rcParamsDefault)
plt.rc('font', family='STIXGeneral', size=20)
plt.rc('xtick', labelsize='medium')
plt.rc('ytick', labelsize='medium')
plt.rc('text', usetex=True)
plt.rc('axes', linewidth=2, titlesize='large', labelsize='large')
# -
# # Visualizing the data
#
# __Author:__ <NAME> (@jiwoncpark)
#
# __Created:__ 8/20/2020
#
# __Last run:__ 11/29/2020
#
# __Goals:__
# We compute key features of the images in our training, validation, and test datasets and visualize them.
#
# __Before_running:__
# Generate the dataset, e.g.
# ```bash
# source experiments/generate_datasets.sh
#
# ```
# ## Table of contents
# 1. [Gallery of test-set examples (paper figure)](#gallery)
# 2. [Gallery of the entire test set](#full_gallery)
# +
# Read in the Baobab config and data for the test set
baobab_cfg = BaobabConfig.from_file('/home/jwp/stage/sl/h0rton/baobab_configs/v7/test_v7_baobab_config.py')
meta = pd.read_csv(os.path.abspath(os.path.join(baobab_cfg.out_dir, 'metadata.csv')), index_col=None)
# Get list of all test-set image filenames
img_files = [fname for fname in os.listdir(baobab_cfg.out_dir) if fname.endswith('.npy')]
# Training and inference configs have the noise-related metadata, so read them in
default_version_id = 2 # corresponds to 2 HST orbits
default_version_dir = '/home/jwp/stage/sl/h0rton/experiments/v{:d}'.format(default_version_id)
test_cfg_path = os.path.join(default_version_dir, 'mcmc_default.json')
test_cfg = TestConfig.from_file(test_cfg_path)
train_val_cfg = TrainValConfig.from_file(test_cfg.train_val_config_file_path)
noise_kwargs_default = train_val_cfg.data.noise_kwargs.copy()
# Summary is the summarized inference results
# We merge the summary with the truth metadata
summary = pd.read_csv(os.path.join(default_version_dir, 'summary.csv'), index_col=False, nrows=200)
metadata = pd.read_csv(os.path.join(baobab_cfg.out_dir, 'metadata.csv'), index_col=False)
metadata['id'] = metadata.index # order of lens in metadata is its ID, used for merging
summary = summary.merge(metadata, on='id', suffixes=['', '_meta'], how='inner')
# -
# ## 1. Gallery of test-set examples <a name="gallery"></a>
#
# We display some test-set images from a range of lensed ring brightness for exposure times of 0.5, 1, and 2 HST orbits. We first bin the lenses by the lensed ring brightness.
# ### Get the total flux of the lensed ring
#
# Let's first calculate the flux of the lensed ring for each system.
# +
# Initialize columns related to lensed Einstein ring brightness
summary['lensed_E_ring_flux'] = 0.0
summary['lensed_E_ring_mag'] = 0.0
#summary.drop([200], inplace=True)
# Define models
lens_mass_model = LensModel(lens_model_list=['PEMD', 'SHEAR_GAMMA_PSI'])
src_light_model = LightModel(light_model_list=['SERSIC_ELLIPSE'])
lens_light_model = LightModel(light_model_list=['SERSIC_ELLIPSE'])
ps_model = PointSource(point_source_type_list=['LENSED_POSITION'], fixed_magnification_list=[False])
components = ['lens_mass', 'src_light', 'agn_light', 'lens_light']
bp = baobab_cfg.survey_info.bandpass_list[0] # only one bandpass
survey_object = baobab_cfg.survey_object_dict[bp]
# Dictionary of SingleBand kwargs
noise_kwargs = survey_object.kwargs_single_band()
# Factor of effective exptime relative to exptime of the noiseless images
exposure_time_factor = np.ones([1, 1, 1])
exposure_time_factor[0, :, :] = train_val_cfg.data.eff_exposure_time[bp]/noise_kwargs['exposure_time']
noise_kwargs.update(exposure_time=train_val_cfg.data.eff_exposure_time[bp])
# Dictionary of noise models
noise_model = NoiseModelNumpy(**noise_kwargs)
# For each lens, render the image without lens light and AGN images to compute lensed ring brightness
for lens_i in range(200):
imager = Imager(components, lens_mass_model, src_light_model, lens_light_model=lens_light_model, ps_model=ps_model, kwargs_numerics={'supersampling_factor': 1}, min_magnification=0.0, for_cosmography=True)
imager._set_sim_api(num_pix=64, kwargs_detector=noise_kwargs, psf_kernel_size=survey_object.psf_kernel_size, which_psf_maps=survey_object.which_psf_maps)
imager.kwargs_src_light = [metadata_utils.get_kwargs_src_light(metadata.iloc[lens_i])]
imager.kwargs_src_light = flux_utils.mag_to_amp_extended(imager.kwargs_src_light, imager.src_light_model, imager.data_api)
imager.kwargs_lens_mass = metadata_utils.get_kwargs_lens_mass(metadata.iloc[lens_i])
sample_ps = metadata_utils.get_nested_ps(metadata.iloc[lens_i])
imager.for_cosmography = False
imager._load_agn_light_kwargs(sample_ps)
lensed_total_flux, lensed_src_img = flux_utils.get_lensed_total_flux(imager.kwargs_lens_mass, imager.kwargs_src_light, None, imager.image_model, return_image=True)
lensed_ring_total_flux = np.sum(lensed_src_img)
summary.loc[lens_i, 'lensed_E_ring_flux'] = lensed_ring_total_flux
summary.loc[lens_i, 'lensed_E_ring_mag'] = data_util.cps2magnitude(lensed_ring_total_flux, noise_kwargs['magnitude_zero_point'])
# -
# ### Bin the lenses by the Einstein ring brightness
#
# Now that we've computed the lensed ring brightness, let's plot its distribution and bin the test-set lenses in 4 quantiles.
# +
lensed_ring_bins = np.quantile(summary['lensed_E_ring_mag'].values, [0.25, 0.5, 0.75, 1])
print(lensed_ring_bins)
print(np.digitize([18, 20, 21, 22], lensed_ring_bins)[:5])
summary['lensed_ring_bin'] = np.digitize(summary['lensed_E_ring_mag'].values, lensed_ring_bins)
plt.close('all')
plt.hist(summary['lensed_E_ring_mag'], edgecolor='k', bins=20)
plt.gca().invert_xaxis()
for bin_edge in lensed_ring_bins:
plt.axvline(bin_edge, color='tab:orange', linestyle='--')
plt.xlabel('Einstein ring brightness (mag)')
plt.ylabel('Count')
plt.show()
# -
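# As an aside on the binning call above: `np.digitize` with quantile edges uses
# half-open bins by default, so a value exactly equal to the top edge lands
# *outside* the last bin. A toy example, independent of the lens data:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
edges = np.quantile(x, [0.25, 0.5, 0.75, 1.0])  # array([2.75, 4.5, 6.25, 8.0])
bins = np.digitize(x, edges)                    # bin i covers [edges[i-1], edges[i])
print(bins)  # [0 0 1 1 2 2 3 4] -- note 8.0 falls past the last edge into bin 4
```

# This edge behavior is why a small buffer is added to the top bin edge when the
# binning is redone later in this notebook.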
# ### Visualize training set images
#
# We are now ready to plot the gallery of hand-picked test lenses with varying lensed ring brightness.
# Let's add this new information to the metadata
# We add it to the "precision ceiling" inference summary
prec_version_dir = '/home/jwp/stage/sl/h0rton/experiments/v{:d}'.format(0)
prec_summary = pd.read_csv(os.path.join(prec_version_dir, 'ering_summary.csv'), index_col=None, nrows=200)
summary['lensed_E_ring_mag'] = prec_summary['lensed_E_ring_mag'].values
lensed_ring_bins = np.quantile(summary['lensed_E_ring_mag'].values, [0.25, 0.5, 0.75, 1])
lensed_ring_bins[-1] += 0.1 # buffer
summary['lensed_ring_bin'] = np.digitize(summary['lensed_E_ring_mag'].values, lensed_ring_bins)
#summary[['id', 'lensed_E_ring_mag', 'lensed_ring_bin', 'n_img']].values
# +
n_rows = 3
n_cols = 8
n_img = n_rows*n_cols
plt.close('all')
fig = plt.figure(figsize=(32, 12))
imgs_per_row = n_img//n_rows
ax = []
bp = baobab_cfg.survey_info.bandpass_list[0]
exposure_time_factor = 1
survey_object = baobab_cfg.survey_object_dict[bp]
# Dictionary of SingleBand kwargs
noise_kwargs_default = survey_object.kwargs_single_band()
# Factor of effective exptime relative to exptime of the noiseless images
noise_kwargs_default.update(exposure_time=5400.0)
# Dictionary of noise models
noise_model = NoiseModelNumpy(**noise_kwargs_default)
orig_img_ids = [181, 4, 39, 199, 58, 56, 186, 184][::-1] # 8 hand-picked lenses
distinct_lenses = len(orig_img_ids)
img_dict = {} # will be populated as a nested dict, img[img_id][exp_factor]
for i, img_id in enumerate(orig_img_ids):
img_dict[img_id] = {}
for exp_i, exp_factor in enumerate([0.5, 1.0, 2.0]):
noise_kwargs_default.update(exposure_time=5400*exp_factor)
noise_model = NoiseModelNumpy(**noise_kwargs_default)
img = np.load(os.path.join(baobab_cfg.out_dir, 'X_{0:07d}.npy'.format(img_id)))
# The images were generated with 1 HST orbit,
# so scale the image pixel values to get desired exposure time
img *= exp_factor
noise_map = noise_model.get_noise_map(img)
img += noise_map
img_dict[img_id][exp_factor] = img
vmin_dict = {}
vmax_dict = {}
for i, img_id in enumerate(orig_img_ids):
# Get the min/max pixel value in images across exposure times
# to get the optimal pixel scale for that lens
min_pixel_vals = [np.min(lens_image[lens_image > 0]) for lens_image in [img_dict[img_id][exp_factor] for exp_factor in [0.5, 1.0, 2.0]]]
max_pixel_vals = [np.max(lens_image) for lens_image in [img_dict[img_id][exp_factor] for exp_factor in [0.5, 1.0, 2.0]]]
vmin_dict[img_id] = min(min_pixel_vals)
vmax_dict[img_id] = max(max_pixel_vals)
for i in range(n_cols*n_rows):
img_id = orig_img_ids[i%n_cols]
exp_factor = [0.5, 1.0, 2.0][i//n_cols]
img = img_dict[img_id][exp_factor]
img = np.squeeze(img)
fig.add_subplot(n_rows, n_cols, i+1)
img[img < 0] = vmin_dict[img_id]
plt.imshow(img, origin='lower', norm=LogNorm(), vmin=vmin_dict[img_id], vmax=vmax_dict[img_id], cmap='viridis')
plt.axis('off')
plt.tight_layout()
#plt.savefig('../training_set_gallery_fully_transformed.png', bbox_inches='tight', pad_inches=0)
plt.show()
# -
# ## 2. Gallery of the entire test set <a name="full_gallery"></a>
# +
bp = baobab_cfg.survey_info.bandpass_list[0]
survey_object = baobab_cfg.survey_object_dict[bp]
# Dictionary of SingleBand kwargs
noise_kwargs_default = survey_object.kwargs_single_band()
# Factor of effective exptime relative to exptime of the noiseless images
exp_factor = 0.5
imgs = [] # will be populated as a nested dict, img[img_id][exp_factor]
for i, img_id in enumerate(np.arange(200)):
noise_kwargs_default.update(exposure_time=5400.0*exp_factor)
noise_model = NoiseModelNumpy(**noise_kwargs_default)
img = np.load(os.path.join(baobab_cfg.out_dir, 'X_{0:07d}.npy'.format(img_id)))
img *= exp_factor
noise_map = noise_model.get_noise_map(img)
img += noise_map
imgs.append(img.squeeze())
for pad in range(10):
imgs.append(np.ones((64, 64))*1.e-7)
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import ImageGrid
plt.close('all')
fig = plt.figure(figsize=(24, 24))
grid = ImageGrid(fig, 111, # similar to subplot(111)
nrows_ncols=(15, 15), # creates 2x2 grid of axes
axes_pad=0.05, # pad between axes in inch.
)
for ax, im in zip(grid, imgs):
# Iterating over the grid returns the Axes.
ax.imshow(im, norm=LogNorm())
ax.axis('off')
ax.set_xticklabels([])
plt.axis('off') # didn't work for the lowermost x axis
cur_axes = plt.gca()
cur_axes.axes.get_xaxis().set_visible(False)
cur_axes.axes.get_yaxis().set_visible(False)
plt.show()
# -
|
demo/[Paper]_Visualizing_the_Data.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Single Transmon - Floating with 6 connection pads
#
# We'll be creating a 2D design and adding a single transmon qcomponent with 6 connection pads.
#
# We create a standard floating pocket transmon qubit in a ground plane, with 6
# connection pads and its two main pads connected by a junction.
# So, let us dive right in. For convenience, let's begin by enabling
# automatic reloading of modules when they change.
# %load_ext autoreload
# %autoreload 2
import qiskit_metal as metal
from qiskit_metal import designs, draw
from qiskit_metal import MetalGUI, Dict, open_docs
# +
# Each time you create a new quantum circuit design,
# you start by instantiating a QDesign class.
# The design class `DesignPlanar` is best for 2D circuit designs.
design = designs.DesignPlanar()
# -
#Launch Qiskit Metal GUI to interactively view, edit, and simulate QDesign: Metal GUI
gui = MetalGUI(design)
# To force overwrite a QComponent with an existing name.
# This is useful when re-running cells in a notebook.
design.overwrite_enabled = True
# ### A transmon qubit with 6 connection pads
# You can create a ready-made transmon qubit with 6 connection pads from the QComponent Library, `qiskit_metal.qlibrary.qubits`.
# `transmon_pocket_6.py` is the file containing our qubit so `transmon_pocket_6` is the module we import.
# The `TransmonPocket6` class is our transmon qubit. Like all quantum components, `TransmonPocket6` inherits from `QComponent`.
#
# Connector lines can be added using the `connection_pads` dictionary.
# Each connector pad has a name and a list of default properties.
# +
from qiskit_metal.qlibrary.qubits.transmon_pocket_6 import TransmonPocket6
# Be aware of the default_options that can be overridden by user.
TransmonPocket6.get_template_options(design)
# +
transmon_options = dict(
pos_x = '1mm',
pos_y = '2mm',
orientation = '90',
connection_pads=dict(
a = dict(loc_W=+1, loc_H=-1, pad_width='70um', cpw_extend = '50um'),
b = dict(loc_W=-1, loc_H=-1, pad_width='125um', cpw_extend = '50um', pad_height='60um'),
c = dict(loc_W=+1, loc_H=+1, pad_width='110um', cpw_extend = '50um')
),
gds_cell_name='FakeJunction_01',
)
# Create a new Transmon Pocket object with name 'Q1'
q1 = TransmonPocket6(design, 'Q1', options=transmon_options)
gui.rebuild() # rebuild the design and plot
gui.autoscale() # resize GUI to see QComponent
gui.zoom_on_components(['Q1']) #Can also gui.zoom_on_components([q1.name])
# -
# Let's see what the Q1 object looks like
q1 #print Q1 information
# Save screenshot as a .png formatted file.
gui.screenshot()
# + tags=["nbsphinx-thumbnail"]
# Screenshot the canvas only as a .png formatted file.
gui.figure.savefig('shot.png')
from IPython.display import Image, display
_disp_ops = dict(width=500)
display(Image('shot.png', **_disp_ops))
# -
# ## Closing the Qiskit Metal GUI
gui.main_window.close()
|
tutorials/Appendix C Circuit examples/A. Qubits/06-Transmon_floating_6.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] deletable=true editable=true
# # Text
#
# This notebook serves as supporting material for topics covered in **Chapter 22 - Natural Language Processing** from the book *Artificial Intelligence: A Modern Approach*. This notebook uses implementations from [text.py](https://github.com/aimacode/aima-python/blob/master/text.py).
# + [markdown] deletable=true editable=true
# ## Contents
#
# * Text Models
# * Viterbi Text Segmentation
# * Overview
# * Implementation
# * Example
# + [markdown] deletable=true editable=true
# ## Text Models
#
# Before we start performing text processing algorithms, we will need to build some word models. Those models serve as a look-up table for word probabilities. In the text module we have implemented two such models, which inherit from the `CountingProbDist` from `learning.py`. `UnigramTextModel` and `NgramTextModel`. We supply them with a text file and they show the frequency of the different words.
#
# The main difference between the two models is that the first returns the probability of a single word (e.g. the probability of the word 'the' appearing), while the second one can show us the probability of a *sequence* of words (e.g. the probability of the sequence 'of the' appearing).
#
# Also, both functions can generate random words and sequences respectively, random according to the model.
#
# Below we build the two models. The text file we will use to build them is *Flatland*, by <NAME>. We will load it from [here](https://github.com/aimacode/aima-data/blob/a21fc108f52ad551344e947b0eb97df82f8d2b2b/EN-text/flatland.txt).
# + deletable=true editable=true
from text import UnigramTextModel, NgramTextModel, words
from utils import DataFile
flatland = DataFile("EN-text/flatland.txt").read()
wordseq = words(flatland)
P1 = UnigramTextModel(wordseq)
P2 = NgramTextModel(2, wordseq)
print(P1.top(5))
print(P2.top(5))
# + [markdown] deletable=true editable=true
# We see that the most used word in *Flatland* is 'the', with 2081 occurrences, while the most used sequence is 'of the' with 368 occurrences.
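# The counting behind these models can be sketched with the standard library
# alone (the sample sentence below is illustrative, not drawn from *Flatland*):

```python
from collections import Counter

# Toy corpus standing in for the word sequence of a real text
wordseq = "the cat sat on the mat and the cat slept".split()

unigrams = Counter(wordseq)                   # single-word frequencies
bigrams = Counter(zip(wordseq, wordseq[1:]))  # two-word sequence frequencies

print(unigrams.most_common(2))  # [('the', 3), ('cat', 2)]
print(bigrams.most_common(1))   # [(('the', 'cat'), 2)]
```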
# + [markdown] deletable=true editable=true
# ## Viterbi Text Segmentation
#
# ### Overview
#
# We are given a string containing words of a sentence, but all the spaces are gone! It is very hard to read and we would like to separate the words in the string. We can accomplish this by employing the `Viterbi Segmentation` algorithm. It takes as input the string to segment and a text model, and it returns a list of the separate words.
#
# The algorithm operates in a dynamic programming approach. It starts from the beginning of the string and iteratively builds the best solution using previous solutions. It accomplishes that by segmenting the string into "windows", each window representing a word (real or gibberish). It then calculates the probability of the sequence up to that window/word occurring and updates its solution. When it is done, it traces back from the final word and finds the complete sequence of words.
# + [markdown] deletable=true editable=true
# ### Implementation
# + deletable=true editable=true
def viterbi_segment(text, P):
"""Find the best segmentation of the string of characters, given the
UnigramTextModel P."""
# best[i] = best probability for text[0:i]
# words[i] = best word ending at position i
n = len(text)
words = [''] + list(text)
best = [1.0] + [0.0] * n
    # Fill in the vectors best and words via dynamic programming
for i in range(n+1):
for j in range(0, i):
w = text[j:i]
newbest = P[w] * best[i - len(w)]
if newbest >= best[i]:
best[i] = newbest
words[i] = w
# Now recover the sequence of best words
sequence = []
i = len(words) - 1
while i > 0:
sequence[0:0] = [words[i]]
i = i - len(words[i])
# Return sequence of best words and overall probability
return sequence, best[-1]
# + [markdown] deletable=true editable=true
# The function takes as input a string and a text model, and returns the most probable sequence of words, together with the probability of that sequence.
#
# The "window" is `w` and it includes the characters from *j* to *i*. We use it to "build" the following sequence: from the start to *j* and then `w`. We have previously calculated the probability from the start to *j*, so now we multiply that probability by `P[w]` to get the probability of the whole sequence. If that probability is greater than the probability we have calculated so far for the sequence from the start to *i* (`best[i]`), we update it.
# + [markdown] deletable=true editable=true
# ### Example
#
# The model the algorithm uses is the `UnigramTextModel`. First we will build the model using the *Flatland* text and then we will try and separate a space-devoid sentence.
# + deletable=true editable=true
from text import UnigramTextModel, words, viterbi_segment
from utils import DataFile
flatland = DataFile("EN-text/flatland.txt").read()
wordseq = words(flatland)
P = UnigramTextModel(wordseq)
text = "itiseasytoreadwordswithoutspaces"
s, p = viterbi_segment(text,P)
print("Sequence of words is:",s)
print("Probability of sequence is:",p)
# + [markdown] deletable=true editable=true
# The algorithm correctly retrieved the words from the string. It also gave us the probability of this sequence, which is small, but still the most probable segmentation of the string.
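# Note that the segmentation routine only needs *some* word-probability lookup,
# not necessarily the book's model. A self-contained toy run, re-inlining
# `viterbi_segment` with a hand-made probability table:

```python
# Hand-made probability table standing in for UnigramTextModel
P = {'it': 0.2, 'is': 0.2, 'easy': 0.2, 'to': 0.2, 'read': 0.2}
prob = lambda w: P.get(w, 1e-10)  # tiny floor probability for gibberish "words"

def viterbi_segment(text):
    n = len(text)
    words = [''] + list(text)
    best = [1.0] + [0.0] * n
    # Dynamic programming over all "windows" text[j:i]
    for i in range(n + 1):
        for j in range(i):
            w = text[j:i]
            newbest = prob(w) * best[i - len(w)]
            if newbest >= best[i]:
                best[i] = newbest
                words[i] = w
    # Trace back the best word ending at each position
    sequence = []
    i = len(words) - 1
    while i > 0:
        sequence[0:0] = [words[i]]
        i -= len(words[i])
    return sequence

print(viterbi_segment('itiseasytoread'))  # ['it', 'is', 'easy', 'to', 'read']
```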
|
text.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import tensorflow as tf
from sklearn.utils import shuffle
import re
import time
import collections
import os
def build_dataset(words, n_words, atleast=1):
count = [['GO', 0], ['PAD', 1], ['EOS', 2], ['UNK', 3]]
counter = collections.Counter(words).most_common(n_words)
counter = [i for i in counter if i[1] >= atleast]
count.extend(counter)
dictionary = dict()
for word, _ in count:
dictionary[word] = len(dictionary)
data = list()
unk_count = 0
for word in words:
        index = dictionary.get(word, dictionary['UNK'])  # out-of-vocabulary words map to UNK, not GO
        if index == dictionary['UNK']:
            unk_count += 1
        data.append(index)
    count[3][1] = unk_count  # UNK sits at index 3 of count
reversed_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
return data, count, dictionary, reversed_dictionary
# +
lines = open('movie_lines.txt', encoding='utf-8', errors='ignore').read().split('\n')
conv_lines = open('movie_conversations.txt', encoding='utf-8', errors='ignore').read().split('\n')
id2line = {}
for line in lines:
_line = line.split(' +++$+++ ')
if len(_line) == 5:
id2line[_line[0]] = _line[4]
convs = [ ]
for line in conv_lines[:-1]:
_line = line.split(' +++$+++ ')[-1][1:-1].replace("'","").replace(" ","")
convs.append(_line.split(','))
questions = []
answers = []
for conv in convs:
for i in range(len(conv)-1):
questions.append(id2line[conv[i]])
answers.append(id2line[conv[i+1]])
def clean_text(text):
text = text.lower()
text = re.sub(r"i'm", "i am", text)
text = re.sub(r"he's", "he is", text)
text = re.sub(r"she's", "she is", text)
text = re.sub(r"it's", "it is", text)
text = re.sub(r"that's", "that is", text)
    text = re.sub(r"what's", "what is", text)
text = re.sub(r"where's", "where is", text)
text = re.sub(r"how's", "how is", text)
text = re.sub(r"\'ll", " will", text)
text = re.sub(r"\'ve", " have", text)
text = re.sub(r"\'re", " are", text)
text = re.sub(r"\'d", " would", text)
text = re.sub(r"won't", "will not", text)
text = re.sub(r"can't", "cannot", text)
text = re.sub(r"n't", " not", text)
text = re.sub(r"n'", "ng", text)
text = re.sub(r"'bout", "about", text)
text = re.sub(r"'til", "until", text)
text = re.sub(r"[-()\"#/@;:<>{}`+=~|.!?,]", "", text)
return ' '.join([i.strip() for i in filter(None, text.split())])
clean_questions = []
for question in questions:
clean_questions.append(clean_text(question))
clean_answers = []
for answer in answers:
clean_answers.append(clean_text(answer))
min_line_length = 2
max_line_length = 5
short_questions_temp = []
short_answers_temp = []
i = 0
for question in clean_questions:
if len(question.split()) >= min_line_length and len(question.split()) <= max_line_length:
short_questions_temp.append(question)
short_answers_temp.append(clean_answers[i])
i += 1
short_questions = []
short_answers = []
i = 0
for answer in short_answers_temp:
if len(answer.split()) >= min_line_length and len(answer.split()) <= max_line_length:
short_answers.append(answer)
short_questions.append(short_questions_temp[i])
i += 1
question_test = short_questions[500:550]
answer_test = short_answers[500:550]
short_questions = short_questions[:500]
short_answers = short_answers[:500]
# -
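# A quick standalone check of the contraction-expansion idea used in clean_text
# above (this re-implements just two of its rules for illustration):

```python
import re

def expand(text):
    # Two representative clean_text rules: expand contractions, strip punctuation
    text = text.lower()
    text = re.sub(r"i'm", "i am", text)
    text = re.sub(r"n't", " not", text)
    text = re.sub(r"[-()\"#/@;:<>{}`+=~|.!?,]", "", text)
    return ' '.join(text.split())

print(expand("I'm not sure... don't ask!"))  # i am not sure do not ask
```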
concat_from = ' '.join(short_questions+question_test).split()
vocabulary_size_from = len(list(set(concat_from)))
data_from, count_from, dictionary_from, rev_dictionary_from = build_dataset(concat_from, vocabulary_size_from)
print('vocab from size: %d'%(vocabulary_size_from))
print('Most common words', count_from[4:10])
print('Sample data', data_from[:10], [rev_dictionary_from[i] for i in data_from[:10]])
print('filtered vocab size:',len(dictionary_from))
print("% of vocab used: {}%".format(round(len(dictionary_from)/vocabulary_size_from,4)*100))
concat_to = ' '.join(short_answers+answer_test).split()
vocabulary_size_to = len(list(set(concat_to)))
data_to, count_to, dictionary_to, rev_dictionary_to = build_dataset(concat_to, vocabulary_size_to)
print('vocab from size: %d'%(vocabulary_size_to))
print('Most common words', count_to[4:10])
print('Sample data', data_to[:10], [rev_dictionary_to[i] for i in data_to[:10]])
print('filtered vocab size:',len(dictionary_to))
print("% of vocab used: {}%".format(round(len(dictionary_to)/vocabulary_size_to,4)*100))
GO = dictionary_from['GO']
PAD = dictionary_from['PAD']
EOS = dictionary_from['EOS']
UNK = dictionary_from['UNK']
for i in range(len(short_answers)):
short_answers[i] += ' EOS'
class Chatbot:
def __init__(self, size_layer, num_layers, embedded_size,
from_dict_size, to_dict_size, learning_rate, batch_size):
def cells(reuse=False):
return tf.nn.rnn_cell.GRUCell(size_layer,reuse=reuse)
self.X = tf.placeholder(tf.int32, [None, None])
self.Y = tf.placeholder(tf.int32, [None, None])
self.X_seq_len = tf.placeholder(tf.int32, [None])
self.Y_seq_len = tf.placeholder(tf.int32, [None])
encoder_embeddings = tf.Variable(tf.random_uniform([from_dict_size, embedded_size], -1, 1))
decoder_embeddings = tf.Variable(tf.random_uniform([to_dict_size, embedded_size], -1, 1))
encoder_embedded = tf.nn.embedding_lookup(encoder_embeddings, self.X)
main = tf.strided_slice(self.X, [0, 0], [batch_size, -1], [1, 1])
decoder_input = tf.concat([tf.fill([batch_size, 1], GO), main], 1)
decoder_embedded = tf.nn.embedding_lookup(encoder_embeddings, decoder_input)
attention_mechanism = tf.contrib.seq2seq.BahdanauAttention(num_units = size_layer,
memory = encoder_embedded)
rnn_cells = tf.contrib.seq2seq.AttentionWrapper(cell = tf.nn.rnn_cell.MultiRNNCell([cells() for _ in range(num_layers)]),
attention_mechanism = attention_mechanism,
attention_layer_size = size_layer)
_, last_state = tf.nn.dynamic_rnn(rnn_cells, encoder_embedded,
dtype = tf.float32)
last_state = tuple(last_state[0][-1] for _ in range(num_layers))
with tf.variable_scope("decoder"):
rnn_cells_dec = tf.nn.rnn_cell.MultiRNNCell([cells() for _ in range(num_layers)])
outputs, _ = tf.nn.dynamic_rnn(rnn_cells_dec, decoder_embedded,
initial_state = last_state,
dtype = tf.float32)
self.logits = tf.layers.dense(outputs,to_dict_size)
masks = tf.sequence_mask(self.Y_seq_len, tf.reduce_max(self.Y_seq_len), dtype=tf.float32)
self.cost = tf.contrib.seq2seq.sequence_loss(logits = self.logits,
targets = self.Y,
weights = masks)
self.optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(self.cost)
size_layer = 256
num_layers = 2
embedded_size = 128
learning_rate = 0.001
batch_size = 16
epoch = 20
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Chatbot(size_layer, num_layers, embedded_size, len(dictionary_from),
len(dictionary_to), learning_rate,batch_size)
sess.run(tf.global_variables_initializer())
def str_idx(corpus, dic):
X = []
for i in corpus:
ints = []
for k in i.split():
try:
ints.append(dic[k])
except Exception as e:
print(e)
ints.append(UNK)
X.append(ints)
return X
X = str_idx(short_questions, dictionary_from)
Y = str_idx(short_answers, dictionary_to)
X_test = str_idx(question_test, dictionary_from)
Y_test = str_idx(answer_test, dictionary_to)  # answers are indexed with the target-side dictionary
# +
def pad_sentence_batch(sentence_batch, pad_int):
padded_seqs = []
seq_lens = []
max_sentence_len = 10
for sentence in sentence_batch:
padded_seqs.append(sentence + [pad_int] * (max_sentence_len - len(sentence)))
        seq_lens.append(max_sentence_len)
return padded_seqs, seq_lens
def check_accuracy(logits, Y):
acc = 0
for i in range(logits.shape[0]):
internal_acc = 0
count = 0
for k in range(len(Y[i])):
try:
if Y[i][k] == logits[i][k]:
internal_acc += 1
count += 1
if Y[i][k] == EOS:
break
except:
break
        acc += (internal_acc / count) if count else 0  # guard against rows with no counted tokens
return acc / logits.shape[0]
# -
for i in range(epoch):
total_loss, total_accuracy = 0, 0
for k in range(0, (len(short_questions) // batch_size) * batch_size, batch_size):
batch_x, seq_x = pad_sentence_batch(X[k: k+batch_size], PAD)
batch_y, seq_y = pad_sentence_batch(Y[k: k+batch_size], PAD)
predicted, loss, _ = sess.run([tf.argmax(model.logits,2), model.cost, model.optimizer],
feed_dict={model.X:batch_x,
model.Y:batch_y,
model.X_seq_len:seq_x,
model.Y_seq_len:seq_y})
total_loss += loss
total_accuracy += check_accuracy(predicted,batch_y)
total_loss /= (len(short_questions) // batch_size)
total_accuracy /= (len(short_questions) // batch_size)
print('epoch: %d, avg loss: %f, avg accuracy: %f'%(i+1, total_loss, total_accuracy))
for i in range(len(batch_x)):
print('row %d'%(i+1))
print('QUESTION:',' '.join([rev_dictionary_from[n] for n in batch_x[i] if n not in [0,1,2,3]]))
print('REAL ANSWER:',' '.join([rev_dictionary_to[n] for n in batch_y[i] if n not in[0,1,2,3]]))
print('PREDICTED ANSWER:',' '.join([rev_dictionary_to[n] for n in predicted[i] if n not in[0,1,2,3]]),'\n')
# +
batch_x, seq_x = pad_sentence_batch(X_test[:batch_size], PAD)
batch_y, seq_y = pad_sentence_batch(Y_test[:batch_size], PAD)
predicted = sess.run(tf.argmax(model.logits,2), feed_dict={model.X:batch_x,model.X_seq_len:seq_x})
for i in range(len(batch_x)):
print('row %d'%(i+1))
print('QUESTION:',' '.join([rev_dictionary_from[n] for n in batch_x[i] if n not in [0,1,2,3]]))
print('REAL ANSWER:',' '.join([rev_dictionary_to[n] for n in batch_y[i] if n not in[0,1,2,3]]))
print('PREDICTED ANSWER:',' '.join([rev_dictionary_to[n] for n in predicted[i] if n not in[0,1,2,3]]),'\n')
|
Chatbot/deep-learning/18.gru-seq2seq-bahdanau.ipynb
|
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .sh
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Calysto Bash
# language: bash
# name: calysto_bash
# ---
# <!--NAVIGATION-->
# | [to the lesson](05_Shell_for_Schleife.ipynb) |
# # Comprehension Questions on Lesson 05_for_Schleife
# ## Comprehension Question 1
# Annika is running tests on some of her data. Suppose an `ls` in her test directory returns
# ```bash
# > ls
# 695844p.txt 712705p.txt 729982p.txt
# ```
# What is the output of the following loops (loops 1-4), and what does the file `dateien.txt` contain at the end of the loop (loops 5 and 6)? For loops 5 and 6, assume that *no* file named `dateien.txt` exists before the loop runs.
#
# 1.
# ```bash
# for DATEI in *p.txt
# do
# ls *.txt
# done
# ```
#
# 2.
# ```bash
# for DATEI in *p.txt
# do
# echo "${DATEI}"
# done
# ```
#
# 3.
# ```bash
# for DATEI in 7*
# do
# echo "DATEI ${DATEI}"
# done
# ```
#
# 4.
# ```bash
# for DATEI in *5*; do echo "${DATEI}"; done
#
# ```
#
# 5.
# ```bash
# # At this point, no file named
# # dateien.txt exists!
# for DATEI in 695844p.txt 712705p.txt
# do
# echo "${DATEI}" > dateien.txt
# done
# ```
#
# 6.
# ```bash
# # At this point, no file named
# # dateien.txt exists!
# for DATEI in 695844p.txt 712705p.txt
# do
# echo "${DATEI}" >> dateien.txt
# done
# ```
# ## Comprehension Question 2
# What is the difference between the following two loop constructs, and what is the result of each? Before the loop runs, assume in each case that *no* file named `datei.txt` exists.
#
# 1.
# ```bash
# for DATEI in Uranus.txt Saturn.txt
# do
# echo "${DATEI}" >> datei.txt
# done
# ```
#
# 2.
# ```bash
# for DATEI in Uranus.txt Saturn.txt
# do
# echo "${DATEI} >> datei.txt"
# done
# ```
# ## Comprehension Question 3
# Describe in words what the following construct with a nested `for` loop does:
# ```bash
# for FELD in D1 D2 D3 D4
# do
# for KONFIG in u g r i z
# do
# mkdir ${FELD}_${KONFIG}
# done
# done
# ```
# ## Comprehension Question 4
# In the lesson, Annika wanted to find, for each of her planet data files, the moon with the shortest orbital period. To do this, she ran commands like `sort -g -k 4 Jupiter.dat` inside `~/Seminar/Planeten_Daten`.
#
# To make this task easier in the future, she now wants to replace all the planet data files with versions sorted in ascending order by the fourth column (the moon's orbital period around the planet). Give a `for` loop that accomplishes this task.
#
# **Hint:** Have another look at [this info box](04_Shell_Pipelines_und_Filter.ipynb#Ausgabeumlenkung_Alert).
# ## Comprehension Question 5
# <NAME> suspects that something went wrong when Annika's observation data were created. If so, this would affect the third line of each of her data files.
#
# Annika therefore wants to extract the third line from each of her observation files in `~/Bachelor_Arbeit/Beobachtungen/processed` and save all of these lines to the file `dritte_zeilen.txt`. She then wants to run the program `calc_stats.py` on this file. Assume Annika is in `~/Bachelor_Arbeit/Beobachtungen/processed`.
#
# Give a `for` loop that creates the file `dritte_zeilen.txt`.
# <!--NAVIGATION-->
# | [to the lesson](05_Shell_for_Schleife.ipynb) |
|
Verstaendnisfragen_zu_Lektion_05_for_Schleife.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/btjoshi/project-based-learning/blob/master/Sierpinski_triangle.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="KsUk2_nJp0Sf" colab_type="code" colab={}
import numpy as np
import matplotlib.pyplot as plt
# + id="cvYdDhwnqOoE" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 264} outputId="53660c7a-e7f6-4576-a1a5-afc6ffd0fa27"
N = 10000
# initialize x and y vectors
sx = np.zeros(N)
sy = np.zeros(N)
for i in range(1,N):
#generate random number {1,2,3}
k = np.random.randint(1,4)
#update the x and y points
sx[i] = sx[i-1]/2 + k - 1
sy[i] = sy[i-1]/2
if k==2:
sy[i] += 2
plt.plot(sx, sy, 'k.', markersize=1)
plt.title('Sierpinski triangle - fractal')
plt.axis('off')
plt.show()
# + id="XW4Y7XytsQRB" colab_type="code" colab={}
|
Sierpinski_triangle.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # What you will learn
# - what is a for loop
# - how to iterate through lists using for loops
# ## What is a for loop
#
# A for loop will repeat a specific section of code a number of times
#
# It is best seen in action
# Let's use a for loop to count to the number 10
for i in range(10):
print(i)
# #### Note
# - Unlike Matlab, Python (and almost every other programming language) starts counting at 0
# - This is because computers start addressing memory at address 0 (starting at 1 would waste space)
# - The last number in the range is exclusive
for i in range(10, 20):
print(i)
# ### Note
# - We can specify the range we want to loop over
# - In this case starting at 10 (inclusive) and ending at 20 (exclusive)
# Using range we can also increase by steps other than 1
for i in range(0, 50, 10):
print(i)
# ### Looping through a list
#
# One of the most common things we need to do is loop through a list. Right off the bat there is a good way to do this, and a bad way.
# - This is probably what you do in Matlab
# +
fruits = ["Apples", "Bananas", "Oranges", "Pineapples"]
for i in range(len(fruits)):
print(fruits[i])
# -
# While it works, this is bad form in Python. Why? Because the Python language is not being used to its fullest extent
# +
fruits = ["Apples", "Bananas", "Oranges", "Pineapples"]
for fruit in fruits:
print(fruit)
# -
# ### Note
# - This is easier to read
# - NEVER underestimate the importance of readable code. Think of how bad it is to read through a terribly written essay. Now imagine reading through a terribly written essay written in a language built for computers doing complex logic
#
# ## Common Things With Lists
# ### Enumerate
#
# Sometimes we need an index number while still iterating over a list. We can use enumerate to do this
# +
fruits = ["Apples", "Bananas", "Oranges", "Pineapples"]
for i, fruit in enumerate(fruits):
print(f"{fruit} is at the {i} index")
# -
# ### Note
# - We take in the variable i, followed by a comma, then the variable fruit
# - If we reversed these, it would print "0 is at the Apples index" and so forth
# ### Zip
#
# What if we need to iterate over two lists at the same time? Well, we could use enumerate and then index into the other list ... BUT there is a better way: zip
# +
fruits = ["Apples", "Bananas", "Oranges", "Pineapples"]
cars = ["Truck", "Jeep", "Sports Car", "SUV", "Sedan", "Hatch-Back", "Cross-Over"]
for fruit, car in zip(fruits, cars):
print(f"{fruit} is a type of fruit")
print(f"{car} is a type of car\n")
# -
# ### Note
# - There are more items in the list 'cars' than in the list 'fruits'
# - zip stops iterating as soon as every item in the shortest list has been iterated through
# - zip can combine any number of lists
# - Notice that fruit, car is in the same order as fruits, cars
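# If you do need every item from the longer list, the standard library's `itertools.zip_longest` pads the shorter list with a fill value instead of stopping early (a small sketch, not part of the original lesson):

```python
from itertools import zip_longest

fruits = ["Apples", "Bananas"]
cars = ["Truck", "Jeep", "SUV"]

# the shorter list is padded with fillvalue so nothing from `cars` is dropped
pairs = list(zip_longest(fruits, cars, fillvalue="(none)"))
print(pairs)
# [('Apples', 'Truck'), ('Bananas', 'Jeep'), ('(none)', 'SUV')]
```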
# ## Looping through a dictionary
#
# There are a few different ways we can loop through a dictionary. We'll go through all of them
# ### .keys()
#
# We can create a list of all the keys in a dictionary.
# +
books = {
'<NAME>': '<NAME>',
'The Kingkiller Chronicles': '<NAME>',
'Game of Thrones': '<NAME>',
'The Lord of the Rings': '<NAME>'}
for key in books.keys():
print(key)
# -
# ### Some background theory
#
# books.keys() is not actually a list. It is a 'dict_keys' object
# +
books = {
'<NAME>': '<NAME>',
'The Kingkiller Chronicles': '<NAME>',
'Game of Thrones': '<NAME>',
'The Lord of the Rings': '<NAME>'}
bookKeys = books.keys()
print(type(bookKeys))
print(bookKeys)
# -
# So how can we iterate over bookKeys? Under the hood, we can see some information about variables using dir()
# +
books = {
'<NAME>': '<NAME>',
'The Kingkiller Chronicles': '<NAME>',
'Game of Thrones': '<NAME>',
'The Lord of the Rings': '<NAME>'}
bookKeys = books.keys()
print(dir(bookKeys))
# -
# There is a lot of mumbo-jumbo in here you don't need to worry about. BUT notice that there is something called `__iter__`.
#
# `__iter__` tells Python how to get an iterator for the object. That iterator has a method called `__next__()` that tells Python how to move to the next "thing"
#
# This is how the for loop knows how to move through the loop
#
# You cannot loop through an iterable object backwards because there is no `__back__()`, only a `__next__()` call
# +
# define a list
my_list = [4, 7, 0, 3]
# get an iterator using iter()
my_iter = iter(my_list)
## iterate through it using next()
#prints 4
print(my_iter.__next__())
#prints 7
print(my_iter.__next__())
## next(obj) is same as obj.__next__()
#prints 0
print(my_iter.__next__())
#prints 3
print(my_iter.__next__())
## This will raise error, no items left
next(my_iter)
# -
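# The manual `__next__()` calls above are essentially what a `for` loop does for you behind the scenes. A rough sketch of the equivalent while loop:

```python
fruits = ["Apples", "Bananas", "Oranges"]

seen = []
it = iter(fruits)            # calls fruits.__iter__()
while True:
    try:
        fruit = next(it)     # calls it.__next__()
    except StopIteration:    # raised when nothing is left
        break
    seen.append(fruit)

print(seen)  # ['Apples', 'Bananas', 'Oranges']
```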
# ### Note
# - You can dir() any variable in Python
# - This can give you some insights on usage
#
# Here is an example on a list
# +
fruits = ["Apples", "Bananas", "Oranges", "Pineapples"]
print(dir(fruits))
# -
# ### Note
# - Notice how "append", "copy", "pop", etc are listed here? ...
# #### Back to looping
#
# ## Looping by values
# +
books = {
'<NAME>': '<NAME>',
'The Kingkiller Chronicles': '<NAME>',
'Game of Thrones': '<NAME>',
'The Lord of the Rings': '<NAME>'}
for val in books.values():
print(val)
# -
# ### Looping by item
# +
books = {
'<NAME>': '<NAME>',
'The Kingkiller Chronicles': '<NAME>',
'Game of Thrones': '<NAME>',
'The Lord of the Rings': '<NAME>'}
for key, val in books.items():
print(f"{key} by {val}")
# -
# ## Breaking out of a loop
#
# If you are iterating through something and want to stop before all items have been iterated over, you can use break
# +
numbers = [1,2,3,4,5,6,7,8,9,10]
# will get to 5 and then break from the loop
# so this will print 1,2,3, and 4
for num in numbers:
if num == 5:
break
print(num)
# -
# # What you need to do
#
# Data samples for the temperature are collected once a day for 100 days. Each sample is tagged with a timestamp and
# a temperature value. These data samples are stored in the below variable "data".
#
# - Somewhere between days 24 - 67 (inclusively) there was an error reading the temperature. The sensor reported a value
# far too hot to be possible. Find this value and remove it from the list.
# - We need to know how many days the temperature was over 40. Print this to the screen (omitting the bad data point).
# - What is the average temperature over the 99 days (omitting the bad data point)?
# ##### Time stamps
# - The timestamps are in time-since-epoch form
# - This is the number of seconds that have passed from midnight on Jan 1, 1970
# - Below is a quick example of how these time stamps were generated
# - We will talk more on imports later
# - We will talk more on dealing with time later
# +
import time
print(f"{time.time()} seconds ago was New Years 1970")
# -
data = [{'Time Stamp': 1570648123.120855, 'Temp': 20}, {'Time Stamp': 1570734523.120863, 'Temp': 47},
{'Time Stamp': 1570820923.120866, 'Temp': 43}, {'Time Stamp': 1570907323.1208682, 'Temp': 39},
{'Time Stamp': 1570993723.1208708, 'Temp': 45}, {'Time Stamp': 1571080123.120873, 'Temp': 11},
{'Time Stamp': 1571166523.1208751, 'Temp': 17}, {'Time Stamp': 1571252923.120877, 'Temp': 2},
{'Time Stamp': 1571339323.1208792, 'Temp': 49}, {'Time Stamp': 1571425723.1208808, 'Temp': 16},
{'Time Stamp': 1571512123.120883, 'Temp': 27}, {'Time Stamp': 1571598523.120885, 'Temp': 43},
{'Time Stamp': 1571684923.120887, 'Temp': 44}, {'Time Stamp': 1571771323.1208892, 'Temp': 9},
{'Time Stamp': 1571857723.1208909, 'Temp': 1}, {'Time Stamp': 1571944123.120893, 'Temp': 38},
{'Time Stamp': 1572030523.120895, 'Temp': 57}, {'Time Stamp': 1572116923.120897, 'Temp': 5},
{'Time Stamp': 1572203323.120899, 'Temp': 22}, {'Time Stamp': 1572289723.120901, 'Temp': 9},
{'Time Stamp': 1572376123.120903, 'Temp': 9}, {'Time Stamp': 1572462523.120905, 'Temp': 11},
{'Time Stamp': 1572548923.120907, 'Temp': 9}, {'Time Stamp': 1572635323.120909, 'Temp': 50},
{'Time Stamp': 1572721723.1209111, 'Temp': 35}, {'Time Stamp': 1572808123.1209128, 'Temp': 49},
{'Time Stamp': 1572894523.120917, 'Temp': 27}, {'Time Stamp': 1572980923.120919, 'Temp': 58},
{'Time Stamp': 1573067323.1209211, 'Temp': 9}, {'Time Stamp': 1573153723.120925, 'Temp': 35},
{'Time Stamp': 1573240123.1209269, 'Temp': 7}, {'Time Stamp': 1573326523.120928, 'Temp': 43},
{'Time Stamp': 1573412923.12093, 'Temp': 37}, {'Time Stamp': 1573499323.120932, 'Temp': 33},
{'Time Stamp': 1573585723.1209338, 'Temp': 47}, {'Time Stamp': 1573672123.120936, 'Temp': 11},
{'Time Stamp': 1573758523.120938, 'Temp': 39}, {'Time Stamp': 1573844923.12094, 'Temp': 17},
{'Time Stamp': 1573931323.120942, 'Temp': 333}, {'Time Stamp': 1574017723.120944, 'Temp': 37},
{'Time Stamp': 1574104123.120946, 'Temp': 39}, {'Time Stamp': 1574190523.120948, 'Temp': 48},
{'Time Stamp': 1574276923.12095, 'Temp': 58}, {'Time Stamp': 1574363323.1209521, 'Temp': 0},
{'Time Stamp': 1574449723.120954, 'Temp': 29}, {'Time Stamp': 1574536123.120956, 'Temp': 26},
{'Time Stamp': 1574622523.1209579, 'Temp': 8}, {'Time Stamp': 1574708923.12096, 'Temp': 16},
{'Time Stamp': 1574795323.1209621, 'Temp': 8}, {'Time Stamp': 1574881723.120965, 'Temp': 49},
{'Time Stamp': 1574968123.1209679, 'Temp': 17}, {'Time Stamp': 1575054523.12097, 'Temp': 22},
{'Time Stamp': 1575140923.120973, 'Temp': 0}, {'Time Stamp': 1575227323.120975, 'Temp': 29},
{'Time Stamp': 1575313723.120977, 'Temp': 14}, {'Time Stamp': 1575400123.1209788, 'Temp': 17},
{'Time Stamp': 1575486523.120981, 'Temp': 45}, {'Time Stamp': 1575572923.1209831, 'Temp': 55},
{'Time Stamp': 1575659323.120985, 'Temp': 3}, {'Time Stamp': 1575745723.120987, 'Temp': 26},
{'Time Stamp': 1575832123.1209888, 'Temp': 25}, {'Time Stamp': 1575918523.120991, 'Temp': 3},
{'Time Stamp': 1576004923.1209931, 'Temp': 44}, {'Time Stamp': 1576091323.120995, 'Temp': 10},
{'Time Stamp': 1576177723.1209972, 'Temp': 49}, {'Time Stamp': 1576264123.120998, 'Temp': 46},
{'Time Stamp': 1576350523.121, 'Temp': 3}, {'Time Stamp': 1576436923.121002, 'Temp': 19},
{'Time Stamp': 1576523323.121004, 'Temp': 26}, {'Time Stamp': 1576609723.121006, 'Temp': 6},
{'Time Stamp': 1576696123.1210082, 'Temp': 23}, {'Time Stamp': 1576782523.1210098, 'Temp': 8},
{'Time Stamp': 1576868923.121012, 'Temp': 20}, {'Time Stamp': 1576955323.121015, 'Temp': 10},
{'Time Stamp': 1577041723.121017, 'Temp': 36}, {'Time Stamp': 1577128123.121021, 'Temp': 58},
{'Time Stamp': 1577214523.121023, 'Temp': 15}, {'Time Stamp': 1577300923.121025, 'Temp': 59},
{'Time Stamp': 1577387323.121026, 'Temp': 8}, {'Time Stamp': 1577473723.1210282, 'Temp': 13},
{'Time Stamp': 1577560123.1210299, 'Temp': 11}, {'Time Stamp': 1577646523.121032, 'Temp': 14},
{'Time Stamp': 1577732923.121034, 'Temp': 14}, {'Time Stamp': 1577819323.121036, 'Temp': 34},
{'Time Stamp': 1577905723.1210382, 'Temp': 35}, {'Time Stamp': 1577992123.1210399, 'Temp': 34},
{'Time Stamp': 1578078523.121042, 'Temp': 30}, {'Time Stamp': 1578164923.121044, 'Temp': 23},
{'Time Stamp': 1578251323.121046, 'Temp': 5}, {'Time Stamp': 1578337723.121048, 'Temp': 13},
{'Time Stamp': 1578424123.1210501, 'Temp': 49}, {'Time Stamp': 1578510523.1210518, 'Temp': 36},
{'Time Stamp': 1578596923.121054, 'Temp': 38}, {'Time Stamp': 1578683323.121056, 'Temp': 17},
{'Time Stamp': 1578769723.121058, 'Temp': 37}, {'Time Stamp': 1578856123.1210601, 'Temp': 23},
{'Time Stamp': 1578942523.1210618, 'Temp': 54}, {'Time Stamp': 1579028923.121063, 'Temp': 13},
{'Time Stamp': 1579115323.121068, 'Temp': 36}, {'Time Stamp': 1579201723.121071, 'Temp': 17}]
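# One way to attack this exercise (a hedged sketch, shown on a tiny made-up sample rather than the real `data` list, so the numbers below are illustrative only):

```python
# a small made-up sample in the same shape as `data` (not the real readings)
sample = [
    {'Time Stamp': 1.0, 'Temp': 20},
    {'Time Stamp': 2.0, 'Temp': 47},
    {'Time Stamp': 3.0, 'Temp': 333},   # impossible reading, like the real outlier
    {'Time Stamp': 4.0, 'Temp': 41},
]

# drop the impossible reading (anything far above a plausible maximum, say 100)
clean = [d for d in sample if d['Temp'] <= 100]

days_over_40 = sum(1 for d in clean if d['Temp'] > 40)
average = sum(d['Temp'] for d in clean) / len(clean)

print(days_over_40)  # 2
print(average)       # 36.0
```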
|
Python_Jupyter_Training/Week_1/.ipynb_checkpoints/7) For Loops-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# %matplotlib inline
from matplotlib import pyplot as plt
import pandas as pd
import numpy as np
import seaborn as sns; sns.set();
# +
applied = {
"3M":["Dylan","Lois","Dan","Noah","Swetha","Ikenna","Melanie","Kyle","JF"],
"DogVacay":['Dan','Dylan','Ikenna','Jimmy','Natalie','Peter','Swetha'],
'Aurotech':['Ikenna','Kyle','Melanie','Noah','Peter','Cindy','Swetha'],
'tronc':['Cindy','Dan','Ikenna','JF','Jimmy','Natalie','Peter','Swetha'],
'InVenture':['Amelia','Cindy','Dan','Dylan','Ikenna','JF','Jimmy','Kyle','Melanie','Natalie','Peter','Swetha'],
'Nielsen':['Cindy','Dan','Dylan','Ikenna','JF','Jimmy','Josh','Kyle','Melanie','Natalie','Noah','Peter','Swetha'],
'BCG':['Cindy','Dylan','Ikenna','JF','Jimmy','Josh','Kyle','Melanie','Natalie','Noah','Peter','Swetha'],
'Facebook':['Amelia','Cindy','Dan','Dylan','Ikenna','JF','Jimmy','Josh','Kyle','Lois','Melanie','Natalie','Noah','Peter','Swetha'],
'Netflix':['Cindy','Dylan','JF','Jimmy','Kyle','Melanie','Natalie','Peter','Swetha'],
'Virtu':['Dylan','JF','Noah','Peter'],
'Amazon':['Amelia','Cindy','Dylan','Ikenna','JF','Jimmy','Josh','Kyle','Lois','Natalie','Noah','Peter','Swetha'],
'Goodyear':['Cindy','Dan','Dylan','Ikenna','Jimmy','Kyle','Melanie','Noah','Peter'],
'HomeAway':['Cindy','Dan','Dylan','Ikenna','JF','Josh','Kyle','Lois','Melanie','Natalie','Noah','Peter'],
'Intuit':['Amelia','Cindy','Dan','Dylan','Ikenna','JF','Jimmy','Kyle','Lois','Melanie','Natalie','Peter','Swetha'],
'iSpot.tv':['Amelia','Cindy','Dan','Dylan','Ikenna','JF','Jimmy','Josh','Kyle','Lois','Melanie','Natalie','Noah','Peter','Swetha'],
'Payoff':['Cindy','Dan','Dylan','Ikenna','JF','Jimmy','Kyle','Melanie','Natalie','Peter','Swetha'],
'Red Bull':['Amelia','Cindy','Dan','Dylan','Ikenna','JF','Jimmy','Kyle','Natalie','Peter','Swetha'],
'Shopify':['Cindy','Dan','Dylan','Ikenna','JF','Jimmy','Kyle','Lois','Melanie','Noah','Peter','Swetha'],
'Zymergen':['Amelia','Cindy','Dan','Dylan','Ikenna','JF','Jimmy','Josh','Kyle','Lois','Melanie','Natalie','Noah','Peter','Swetha'],
}
print "{} total companies in queue.".format(len(applied))
# +
fellows = ['Amelia','Cindy','Dan','Dylan','Ikenna',
'JF','Jimmy','Josh','Kyle','Lois',
'Melanie','Natalie','Noah','Peter','Swetha']
indexes = []
arr = np.zeros((len(applied),len(fellows)),dtype=int)
for i,c in enumerate(applied):
indexes.append(c)
for person in applied[c]:
arr[i,fellows.index(person)] = 1
# -
df = pd.DataFrame(data=arr,
index=indexes,
columns=fellows)
df.head()
# +
# Co-occurrence matrix is the product of the matrix and its transpose
coocc = df.T.dot(df)
coocc.head()
# -
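# A quick sanity check of the transpose-product trick on a toy incidence matrix (a sketch, not part of the original analysis): the diagonal of `X.T.dot(X)` counts each person's applications, and each off-diagonal entry counts the companies a pair of people shares.

```python
import pandas as pd

# toy incidence matrix: rows = companies, columns = people (1 = applied)
toy = pd.DataFrame(
    [[1, 1, 0],
     [1, 0, 1],
     [1, 1, 1]],
    index=['CoA', 'CoB', 'CoC'],
    columns=['Ana', 'Ben', 'Cal'],
)

co = toy.T.dot(toy)
# diagonal = how many companies each person applied to
print(co.loc['Ana', 'Ana'])  # 3
# off-diagonal = companies shared by a pair
print(co.loc['Ana', 'Ben'])  # 2
```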
fig,ax = plt.subplots(1,1,figsize=(14,10))
mask = np.zeros_like(coocc)
mask[np.triu_indices_from(mask)] = True
with sns.axes_style("white"):
sns.heatmap(coocc,
mask = mask,
annot=True, fmt="d",
square=True,
ax=ax)
ax.set_xlabel('Fellow',fontsize=16)
ax.set_ylabel('Fellow',fontsize=16)
ax.set_title("Applied to the same company?",fontsize=18);
# +
# People applied to unequal numbers of companies, though. Maybe I could weight by that.
allnames = []
for job in applied:
allnames.extend(applied[job])
from collections import Counter
cntr = Counter(allnames)
weights = {}
for person in cntr:
weights[person] = 1./cntr[person]
print cntr
print weights
# -
indexes = []
arr = np.zeros((len(applied),len(fellows)),dtype=float)
for i,c in enumerate(applied):
indexes.append(c)
for person in applied[c]:
arr[i,fellows.index(person)] = weights[person]
df = pd.DataFrame(data=arr,
index=indexes,
columns=fellows)
df.head()
# +
# Co-occurrence matrix is the product of the matrix and its transpose
coocc = df.T.dot(df)
coocc.head()
# -
fig,ax = plt.subplots(1,1,figsize=(14,10))
mask = np.zeros_like(coocc)
mask[np.triu_indices_from(mask)] = True
with sns.axes_style("white"):
sns.heatmap(coocc,
mask = mask,
square=True,
ax=ax,
cmap='YlGnBu')
ax.set_xlabel('Fellow',fontsize=16)
ax.set_ylabel('Fellow',fontsize=16)
ax.set_title("Weighted by total number of applications",fontsize=18);
|
notebooks/jobs_together.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Mexico Poverty Implementation
# This was the version of the script used to search for GBDX imagery over AOIs for <NAME>'s collaboration with GOST to test the utility of high resolution satellite imagery for poverty prediction.
#
# It makes use of the same logic as the single AOI script, with a customised first section where overlapping AOIs are grouped together with a sequential 'buffer and collapse' process.
#
# If looking at searching / downloading GBDX imagery for the first time, it is recommended that you start by looking at either of the other scripts (single AOI / Multipoint) in this folder.
# ### Library installation and script setup
# This box needs to only be run once. It builds the environment to carry out the rest of the analysis.
# Run one time only - imports and GBDX interface setup
import pip
import time
import pandas as pd
import geopandas as gpd
import shapely
from shapely.wkt import loads
from shapely.geometry import MultiPolygon, MultiPoint, Polygon, box
from shapely.ops import cascaded_union, unary_union
from shapely.ops import nearest_points
import json
from gbdxtools import Interface
from gbdxtools.task import env
from gbdxtools import CatalogImage
import sys, os
gbdx = Interface()
# %matplotlib inline
# ### Simplify Complex Clustered Polygon Objects
#
# This section aims to import the AOIs as described by raw shapefiles, and draw sensible bounding boxes around them
shps = []
pth = r'C:\Users\charl\Documents\GOST\LAC floods\RE__Fathom_global_flood_data_use'
for f in ['RC_CABA.shp','RC_Cordoba_Capital.shp','RC_Jujuy_Capital.shp','RC_RegionMetropolitanaBA.shp','RC_Resistencia.shp','RC_SantaFe_Capital.shp']:
gdf = gpd.read_file(os.path.join(pth,f))
shp = unary_union(gdf.geometry)
shps.append(shp)
shape = gpd.GeoDataFrame({'geometry':shps}, geometry = 'geometry', crs = {'init':'epsg:4326'})
# +
### rawAOI = r'agebs_val_muni.shp'
crs = {'init': 'epsg:4326'}
bufw = 0.015
# Define conversion function - objects to list of bounding boxes
def BoundingBoxList(MultiPolygonObj):
boxlist = []
for obj in MultiPolygonObj:
coords = [n for n in obj.bounds]
bbox = box(coords[0],coords[1],coords[2],coords[3])
boxlist.append(bbox)
return boxlist
polygons = MultiPolygon(shape['geometry'].loc[i] for i in shape.index)
exterior = cascaded_union(polygons)
exterior_boxxs = BoundingBoxList(exterior)
# Scientific Buffer Setting based on nearest neighbour median
dff = pd.DataFrame({'exterior': exterior})
dff['ext.centroid'] = dff['exterior'].apply(lambda x: x.centroid)
def func(x):
m = dff.loc[dff['ext.centroid'] != x]
l = MultiPoint(m['ext.centroid'].tolist())
n = nearest_points(x, l)
    return x.distance(n[1])
dff['nn_distance'] = dff['ext.centroid'].apply(lambda x: func(x))
bufw = dff['nn_distance'].median()
print('Buffer width set as %f' % bufw)
# Group nearby AOIs
tight_bbox = MultiPolygon(exterior_boxxs)
reduced_boxes = cascaded_union(tight_bbox.buffer(bufw))
rboxxs = BoundingBoxList(reduced_boxes)
final_boxes = cascaded_union(MultiPolygon(rboxxs))
fboxxs = BoundingBoxList(final_boxes.buffer(-bufw))
pd.DataFrame({'AOI_geometry':fboxxs}).to_csv(os.path.join(pth, 'AOI_Collection.csv'))
print('useful area of tight bbox: %d percent' % (exterior.area / tight_bbox.area * 100))
print('useful area of reduced bbox: %d percent' % (exterior.area / reduced_boxes.area * 100))
print('useful area of final bbox: %d percent' % (exterior.area / final_boxes.area * 100))
print('number of AOIs: %d' % len(fboxxs))
# -
# ### Define the Search Parameters
# +
# Define categorical search parameters
cutoff_cloud_cover = 25 # images with CC over this threshold discarded
cutoff_overlap = 0 # images with AOI overlap below this threshold discarded. [N.b.: keep small if AOI large.]
cutoff_date_low = '1-Jan-13' # images older than this date discarded
cutoff_date_high = '1-Jan-16' # images newer than this date discarded
cutoff_nadir = 25 # Images at nadir angles greater than threshold discarded
cutoff_pan_res = 1 # Images below this resolution discarded
accepted_bands = ['PAN_MS1','PAN_MS1_MS2'] # Images with any other band entry discarded
# Define continuous image ranking preferences
optimal_date = '1-Jul-14' # Optimal date (enter as dd-mmm-yy)
optimal_pan_res = 0.4 # Optimal pan resolution, metres
optimal_nadir = 0 # optimal image angle. 0 = vertical
# Define continuous image ranking preference weights. Must sum to 1.
# If user cares more about scenes being contemporaneous, up 'date' weighting at expense of other categories.
pref_weights = {
'cloud_cover': 0.4,
'overlap':0.25,
'date': 0.25,
'nadir': 0.1,
'resolution': 0.0
}
# -
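# Since the comment above says the preference weights must sum to 1, a small helper (hypothetical, not part of the original script) can validate or rescale them before they are used in the ranking:

```python
def normalized(weights):
    """Return a copy of `weights` rescaled so the values sum to 1."""
    total = sum(weights.values())
    if total == 0:
        raise ValueError("all preference weights are zero")
    return {k: v / total for k, v in weights.items()}

pref_weights = {'cloud_cover': 0.4, 'overlap': 0.25, 'date': 0.25,
                'nadir': 0.1, 'resolution': 0.0}
pref_weights = normalized(pref_weights)
print(abs(sum(pref_weights.values()) - 1.0) < 1e-9)  # True
```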
# ### Define Charles Rocks II Process
# +
# %matplotlib inline
def Process(AOI,
cutoff_cloud_cover,
cutoff_overlap,
cutoff_date_low,
cutoff_date_high,
cutoff_nadir,
cutoff_pan_res,
accepted_bands,
optimal_date,
optimal_pan_res,
optimal_nadir,
pref_weights,
AOI_counter
):
# Define bbox object
bbox = [AOI.bounds[i] for i in range(0,len(AOI.bounds))]
# Define search function. Returns up to 1000 images where cloud cover smaller than 25%
    def search_unordered(aoi, _type, count=1000, cloud_cover=25):
        query = "item_type:{} AND item_type:DigitalGlobeAcquisition".format(_type)
        query += " AND attributes.cloudCover_int:<{}".format(cloud_cover)
        return gbdx.vectors.query(aoi, query, count=count)
# Run search on Area of Interest (AOI). Passes in AOI in Well Known Text format (wkt)
records = search_unordered(AOI.wkt, 'DigitalGlobeAcquisition')
# Create list object of all catalog IDs returned in search
ids = [r['properties']['attributes']['catalogID'] for r in records]
# Define Counters
l = 0 # number of non-IDAHO images
scenes = [] # list containing metadata dictionaries of all scenes in our AOI
# Toggle for printing images to screen
download_thumbnails = 0
# Loop catalog IDs
for i in ids:
# Fetch metadata dictionary for each catalog ID in ids list
r = gbdx.catalog.get(i)
# Check location of ID - is it in IDAHO?
try:
location = gbdx.catalog.get_data_location(i)
except:
            location = 'not_delivered'
# Defines IDAHO variable as binary 1 / 0 depending on whether it is in IDAHO already or not
if location == 'not_delivered':
l = l + 1
idaho = 0
else:
idaho = 1
# Download image if image in IDAHO and toggle on
if download_thumbnails == 1:
            image = CatalogImage(i, band_type="MS", bbox=bbox)
image.plot(w=10, h=10)
else:
pass
# Calculate the percentage overlap with our AOI for each scene
# load as a Shapely object the wkt representation of the scene footprint
footprint = r['properties']['footprintWkt']
shapely_footprint = shapely.wkt.loads(footprint)
# Calculate the object that represents the difference between the AOI and the scene footprint
AA = AOI.difference(shapely_footprint)
# Define frac as the fraction, between 0 and 1, of the AOI that the scene covers
frac = 1 - ((AA).area / AOI.area)
# Create BB - the proxy for the useful area. IF scene entirely contains AOI, then BB = AOI, else it is the intersection
# of the scene footprint and the AOI
BB = AOI
if frac < 1:
BB = AOI - AA
# Similarly, AA, the difference area between AOI and the scene, can be set to null if the scene contains 100% of the AOI
if frac == 1:
AA = ""
        # Append key metadata to list object 'scenes' for the current scene, as a dictionary. This then moves into the pandas dataframe.
# Several objects here are from DigitalGlobe's metadata dictionary (anything with an r start)
scenes.append({
'ID':i,
'TimeStamp':r['properties']['timestamp'],
'CloudCover':r['properties']['cloudCover'],
'ImageBands':r['properties']['imageBands'],
'On_IDAHO':idaho,
'browseURL': r['properties']['browseURL'],
'Overlap_%': frac * 100,
'PanResolution': r['properties']['panResolution'],
'MultiResolution': r['properties']['multiResolution'],
'OffNadirAngle': r['properties']['offNadirAngle'],
'Sensor':r['properties']['sensorPlatformName'],
'Full_scene_WKT':r['properties']['footprintWkt'],
'missing_area_WKT':AA,
'useful_area_WKT':BB
})
# Define column order for dataframe of search results
cols = ['ID','Sensor','ImageBands','TimeStamp','CloudCover','Overlap_%','PanResolution','MultiResolution','OffNadirAngle','On_IDAHO','browseURL','Full_scene_WKT','useful_area_WKT','missing_area_WKT']
#Generate pandas dataframe from results
out = pd.DataFrame(scenes,columns = cols)
# Convert Timestamp field to pandas DateTime object
out['TS'] = out['TimeStamp'].apply(lambda x: pd.Timestamp(x))
# Add separate date and time columns for easy interpretation
string = out['TimeStamp'].str.split('T')
out['Date'] = string.str.get(0)
out['Time'] = string.str.get(1)
# Categorical Search: remove disqualified images. Copy of dataframe taken, renamed to 'out_1stcut'.
out_1stcut = out.loc[(out['CloudCover'] <= cutoff_cloud_cover) &
(out['Overlap_%'] >= cutoff_overlap) &
(out['TS'] > pd.Timestamp(cutoff_date_low)) &
(out['TS'] < pd.Timestamp(cutoff_date_high)) &
(out['ImageBands'].isin(accepted_bands)) &
(out['OffNadirAngle'] <= cutoff_nadir) &
(out['PanResolution'] <= cutoff_pan_res)
]
# Apply ranking method over all non-disqualified search results for each field
optimal_date = pd.to_datetime(optimal_date, utc = True)
# each 1% of cloud cover = 1 point
out_1stcut['points_CC'] = (out_1stcut['CloudCover'])
# each 1% of overlap missed = 1 point
out_1stcut['points_Overlap'] = (100 - out_1stcut['Overlap_%'])
# each week away from the optimal date = 1 point
    out_1stcut['points_Date'] = abs(out_1stcut['TS'] - optimal_date).dt.total_seconds() / (60 * 60 * 24 * 7)
# each degree off nadir = 1 point
out_1stcut['points_Nadir'] = abs(out_1stcut['OffNadirAngle'] - optimal_nadir)
# each cm of resolution worse than the optimal resolution = 1 point
out_1stcut['points_Res'] = (out_1stcut['PanResolution'] - optimal_pan_res).apply(lambda x: max(x,0)) * 100
# Define ranking algorithm - weight point components defined above by the preference weighting dictionary
def Ranker(out_1stcut, pref_weights):
a = out_1stcut['points_CC'] * pref_weights['cloud_cover']
b = out_1stcut['points_Overlap'] * pref_weights['overlap']
c = out_1stcut['points_Date'] * pref_weights['date']
d = out_1stcut['points_Nadir'] * pref_weights['nadir']
e = out_1stcut['points_Res'] * pref_weights['resolution']
# Score is linear addition of the number of 'points' the scene wins as defined above. More points = worse fit to criteria
rank = a + b + c + d + e
return rank
# Add new column - Rank Result - with the total number of points accrued by the scene
out_1stcut['RankResult'] = Ranker(out_1stcut, pref_weights)
# Add a Preference order column - Pref_Order - based on Rank Result, sorted ascending (best scene first)
out_1stcut = out_1stcut.sort_values(by = 'RankResult', axis = 0, ascending = True)
out_1stcut = out_1stcut.reset_index()
out_1stcut['Pref_order'] = out_1stcut.index + 1
out_1stcut = out_1stcut.drop(['index'], axis = 1)
cols = ['ID','Sensor','ImageBands','Date','Time','CloudCover','Overlap_%','PanResolution','MultiResolution','OffNadirAngle','On_IDAHO','Pref_order','RankResult','points_CC','points_Overlap','points_Date','points_Nadir','points_Res','browseURL','Full_scene_WKT','useful_area_WKT','missing_area_WKT']
out_1stcut = out_1stcut[cols]
# Create a new copy of the dataframe to work on
    finaldf = out_1stcut.copy()
# Add column for used scene region area, expressed as .wkt
finaldf['used_scene_region_WKT'] = 0
finaldf['used_area'] = 0
# Set initial value of AOI_remaining to the full AOI under consideration
AOI_remaining = AOI
# Create two lists - usedareas for the areas of scenes used in the final product, and AOI_rems to record sequential reduction in
# remaining AOI that needs to be filled
usedareas = []
AOI_rems = []
# Set up loop for each image in dataframe of ranked images
for s in finaldf.index:
if AOI_remaining.area < (AOI.area / 100):
pass
else:
# pick up the WKT of the useful area as the useful_scene_region variable
useful_scene_region = finaldf['useful_area_WKT'].loc[s]
            # (A try/except further below guards writing the results back to the dataframe)
# define 'used_scene_region' as the useable bit of the image that overlaps the AOI
used_scene_region = AOI_remaining.intersection(useful_scene_region)
# calculate the area of that region
used_area = used_scene_region.area
            # Check to see if this is a geometry collection. This shapely type is for 'jumbles' of outputs (e.g. Polygons + Lines)
# This can be created if the intersection process decides that it also wants a 1-pixel strip from the bottom of the image
# as well as the main chunk. This won't translate back to a shapefile, so we drop non-Polygon objects iteratively.
if used_scene_region.type == 'GeometryCollection':
xlist = []
# Iterate through all objects in the geometry collection
for y in used_scene_region.geoms:
# Add polygons to a fresh list
if y.type == 'Polygon':
xlist.append(y)
# Convert that list to a multipolygon object
used_scene_region = MultiPolygon(xlist)
# Append the used bit of the image to the usedareas list.
usedareas.append(used_scene_region)
try:
# Add two new columns to the dataframe - the used scene geometry in wkt, and the area of the used scene
                finaldf.loc[s, 'used_scene_region_WKT'] = used_scene_region
                finaldf.loc[s, 'used_area'] = used_area
except:
pass
# Redefine the area of the AOI that needs to be filled by the next, lower-rank image
AOI_remaining = AOI_remaining.difference(used_scene_region)
            # Add this to the AOI_rems list for troubleshooting and verification
AOI_rems.append(AOI_remaining)
            print('\t...after image %s, %d percent remaining' % (s+1, (AOI_remaining.area/AOI.area*100)))
# Drop from the scene list any scene where the area used is less than 1% of the AOI
finaldf = finaldf.loc[finaldf['used_area'] > (AOI.area / 100)]
    # Print summary statistics to console
    print('AOI %s Complete. Proportion of AOI covered: %d percent' % (AOI_counter, (finaldf['used_area'].sum() / AOI.area * 100)))
#Add counter
finaldf['AOI_counter'] = AOI_counter
return finaldf
# -
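# To make the scoring concrete, here is a small worked example of the point scheme above. The scene values and weights are made up for illustration; they are not taken from the real dataset or the real `pref_weights` dictionary.

```python
# Illustrative only: score one hypothetical scene with the point scheme above.
pref_weights_demo = {'cloud_cover': 1.0, 'overlap': 1.0, 'date': 1.0,
                     'nadir': 1.0, 'resolution': 1.0}

# A scene with 10% cloud, 95% overlap, 4 weeks from the optimal date,
# 5 degrees off nadir, and pan resolution 10 cm worse than optimal:
points = {
    'cloud_cover': 10.0,        # 1 point per % cloud cover
    'overlap': 100.0 - 95.0,    # 1 point per % of overlap missed
    'date': 4.0,                # 1 point per week from the optimal date
    'nadir': 5.0,               # 1 point per degree off nadir
    'resolution': 10.0,         # 1 point per cm of resolution lost
}
score = sum(points[k] * pref_weights_demo[k] for k in points)
print(score)  # 34.0 -- lower means a better fit to the criteria
```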
# ### Run Process
#
AOI_counter = 1
list_of_dfs = []
print('Beginning image identification process. Standby.')
for AOI in fboxxs:
time.sleep(1)
output = Process(
AOI,
cutoff_cloud_cover,
cutoff_overlap,
cutoff_date_low,
cutoff_date_high,
cutoff_nadir,
cutoff_pan_res,
accepted_bands,
optimal_date,
optimal_pan_res,
optimal_nadir,
pref_weights,
AOI_counter
)
list_of_dfs.append(output)
AOI_counter += 1
finaldf = pd.concat(list_of_dfs)
finaldf.to_csv('Scene_List.csv')
print('Process complete')
# ### Run Intersection Check
#
# As the GBDX search function takes bounding boxes, we passed all AOIs to it in box format. It is possible that the above process filled a box with images and that some of those images don't intersect any part of the true AOI. Hence, we remove an image from the ordering list if less than 2% of its used footprint intersects the areas we are interested in.
# +
finaldf = pd.read_csv('Scene_List.csv')
print('length before: %s' % len(finaldf))
def check(x):
if exterior.intersection(x.buffer(0)).area > x.area/50:
return 1
else:
return 0
finaldf['drop'] = finaldf['used_scene_region_WKT'].map(shapely.wkt.loads).apply(lambda x: check(x))
finaldf = finaldf.loc[finaldf['drop'] == 1]
print('length after check: %s' % len(finaldf))
# -
# ### Calculate Area Coverage
#
# This looks at the original shapes - and how far the imagery found covers that area
def area(x):
return exterior.intersection(x.buffer(0)).area
finaldf['AOI_coverage_area'] = finaldf['used_scene_region_WKT'].map(shapely.wkt.loads).apply(lambda x: area(x))
# ### Final Statistical Print and File Save
# +
# Mexico ITRF2008 / LCC
crs_targ = {'init': 'epsg:6372'}
def AreaCalc(obj, crs, crs_targ):
df = gpd.GeoDataFrame(range(0, len(obj)), crs = crs, geometry = obj)
df = df.to_crs(crs_targ)
df['area'] = df.area / 1E6
return df['area'].sum()
mehico = gpd.read_file(mex)
area_of_Mexico = AreaCalc(mehico['geometry'], crs, crs_targ)
area_of_AOIs = AreaCalc(shape['geometry'].tolist(), crs, crs_targ)
area_of_final_bboxs = AreaCalc(fboxxs, crs, crs_targ)
area_of_imagery_used = AreaCalc(finaldf['used_scene_region_WKT'].map(shapely.wkt.loads).tolist(), crs, crs_targ)
unique_images = len(finaldf['ID'].unique())
unique_images_on_idaho = len(finaldf.loc[finaldf['On_IDAHO'] == 1].groupby('ID'))
to_be_ordered = unique_images - unique_images_on_idaho
print('Area of Mexico: %d square kilometres.' % area_of_Mexico)
print('Area of AOIs: %d square kilometres.' % area_of_AOIs)
print('Anticipated area of compute (bounding boxes for AOIs): %d square kilometres' % area_of_final_bboxs)
print('Area of bounding boxes filled by imagery: %d square kilometres' % area_of_imagery_used)
print('As a percentage of Mexico, AOIs = %f percent, imagery area = %f percent' % ((area_of_AOIs*100 / area_of_Mexico), (area_of_final_bboxs*100 / area_of_Mexico)))
print('Percentage coverage of target bounding boxes = %f percent' % (area_of_imagery_used*100 / area_of_final_bboxs))
print('Percentage coverage of actual AOIs = %f percent' % (finaldf['AOI_coverage_area'].sum()*100 / exterior.area))
print('Total images used: %d' % unique_images)
print('Images on IDAHO already: %d' % unique_images_on_idaho)
print('Images that need to be ordered: %d' % to_be_ordered)
finaldf.to_csv('Final_Scene_List.csv')
# -
order_df = finaldf.drop_duplicates('ID')
order_df.to_csv('Unique IDs.csv')
order_df = order_df.loc[order_df['On_IDAHO'] == 0]
order_list = order_df['ID'].tolist()
# ### Order Imagery
#
# Use this cell block to order up to IDAHO all imagery in the `order_list` variable
# +
order_receipts = []
print('Number of images to be ordered: %d' % len(order_list))
consent = 'I agree to ordering these image IDs to IDAHO'
if consent == 'I agree to ordering these image IDs to IDAHO':
for x in order_list:
order_id = gbdx.ordering.order(x)
order_receipts.append(order_id)
else:
    print('please write out your consent in the consent variable above')
# -
# ### Check Ordering Status
#
# Use this code block to check whether the images have yet been ordered up to IDAHO
for receipt in order_receipts[:10]:
    print(gbdx.ordering.status(receipt))
# +
import matplotlib.pyplot as plt
import numpy as np
AREA_TEST = []
AOI_NUMBER = []
def BoundingBoxList(MultiPolygonObj):
boxlist = []
for obj in MultiPolygonObj:
coords = [n for n in obj.bounds]
bbox = box(coords[0],coords[1],coords[2],coords[3])
boxlist.append(bbox)
return boxlist
for i in range(1, 21):
rawAOI = r'agebs_val_muni.shp'
crs = {'init': 'epsg:4326'}
shape = gpd.read_file(rawAOI)
bufw = float(i / 100.0)
polygons = MultiPolygon(shape['geometry'].loc[i] for i in shape.index)
exterior = cascaded_union(polygons)
exterior_boxxs = BoundingBoxList(exterior)
tight_bbox = MultiPolygon(exterior_boxxs)
reduced_boxes = cascaded_union(tight_bbox.buffer(bufw))
rboxxs = BoundingBoxList(reduced_boxes)
final_boxes = cascaded_union(MultiPolygon(rboxxs))
fboxxs = BoundingBoxList(final_boxes.buffer(-bufw))
AREA_TEST.append(final_boxes.area)
AOI_NUMBER.append(len(fboxxs))
X = [x / 100.0 for x in range(1, 21)]
plt.plot(X, AREA_TEST, color = 'blue')
plt.plot(X, AOI_NUMBER, color = 'green')
plt.xlabel('buf_width')
plt.ylabel('Value')
plt.show()
# -
|
Implementations/FY20/SAT_GBDX - Satelite Imagery Search/ImageSearch_Mexico_DNewhouse.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.6.1
# language: julia
# name: julia-1.6
# ---
input = readline("input")
target_x = [119, 176];
target_y = [-141, -84];
# +
# use dummy example
#target_x = [20, 30];
#target_y = [-10, -5];
# -
pos = (0,0);
function simulate_trajectory(start_pos, start_velocity, steps = 100)
pos = start_pos
vel = start_velocity
trajectory = [pos]
for i in 1:steps
pos = (pos[1]+vel[1], pos[2]+vel[2])
vel = (vel[1]-sign(vel[1]), vel[2]-1)
push!(trajectory, pos)
end
return trajectory
end
simulate_trajectory(pos, (6,3))
in_range((x,y)) = x >= target_x[1] && x <= target_x[2] && y >= target_y[1] && y <= target_y[2]
any_in_range(trajectory) = any(in_range.(trajectory))
any_in_range(simulate_trajectory(pos, (6,3)))
highest_point(trajectory) = maximum(getindex.(trajectory, 2))
highest_point(simulate_trajectory(pos, (6,3)))
# Stupid brute-force solution over a reasonable velocity range. I had to increase the number of steps: the highest missiles take more than 100 (even more than 1000) steps.
hits = Dict()
for xvel in 1:50
for yvel in 1:500
t = simulate_trajectory(pos, (xvel,yvel),10000)
if any_in_range(t)
hits[(xvel,yvel)] = highest_point(t)
end
end
end
hits
maximum(values(hits))
# ## Part II
# This is not an elegant solution but I don't care. I need to allow negative initial y velocities as well (and initial x velocities up to 176)
upward_hits = hits;
other_hits = Dict()
for xvel in 1:200
for yvel in 0:-1:-200
t = simulate_trajectory(pos, (xvel,yvel),100)
if any_in_range(t)
other_hits[(xvel,yvel)] = highest_point(t)
end
end
end
other_hits
length(upward_hits) + length(other_hits)
|
17/17.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Classes for callback implementors
# + hide_input=true
from fastai.gen_doc.nbdoc import *
from fastai.callback import *
from fastai import *
# -
# fastai provides a powerful *callback* system, which is documented on the [`callbacks`](/callbacks.html#callbacks) page; look on that page if you're just looking for how to use existing callbacks. If you want to create your own, you'll need to use the classes discussed below.
#
# A key motivation for the callback system is that additional functionality can be entirely implemented in a single callback, so that it's easily read. By using this trick, we will have different methods categorized in different callbacks where we will find clearly stated all the interventions the method makes in training. For instance in the [`LRFinder`](/callbacks.lr_finder.html#LRFinder) callback, on top of running the fit function with exponentially growing LRs, it needs to handle some preparation and clean-up, and all this code can be in the same callback so we know exactly what it is doing and where to look if we need to change something.
#
# In addition, it allows our [`fit`](/basic_train.html#fit) function to be very clean and simple, yet still easily extended. So far in implementing a number of recent papers, we haven't yet come across any situation where we had to modify our training loop source code - we've been able to use callbacks every time.
# + hide_input=true
show_doc(Callback)
# -
# To create a new type of callback, you'll need to inherit from this class, and implement one or more methods as required for your purposes. Perhaps the easiest way to get started is to look at the source code for some of the pre-defined fastai callbacks. You might be surprised at how simple they are! For instance, here is the **entire** source code for [`GradientClipping`](/train.html#GradientClipping):
#
# ```python
# @dataclass
# class GradientClipping(LearnerCallback):
# clip:float
# def on_backward_end(self, **kwargs):
# if self.clip:
# nn.utils.clip_grad_norm_(self.learn.model.parameters(), self.clip)
# ```
# You generally want your custom callback constructor to take a [`Learner`](/basic_train.html#Learner) parameter, e.g.:
#
# ```python
# @dataclass
# class MyCallback(Callback):
# learn:Learner
# ```
#
# Note that this allows the callback user to just pass your callback name to `callback_fns` when constructing their [`Learner`](/basic_train.html#Learner), since that always passes `self` when constructing callbacks from `callback_fns`. In addition, by passing the learner, this callback will have access to everything: e.g all the inputs/outputs as they are calculated, the losses, and also the data loaders, the optimizer, etc. At any time:
# - Changing self.learn.data.train_dl or self.data.valid_dl will change them inside the fit function (we just need to pass the [`DataBunch`](/basic_data.html#DataBunch) object to the fit function and not data.train_dl/data.valid_dl)
# - Changing self.learn.opt.opt (We have an [`OptimWrapper`](/callback.html#OptimWrapper) on top of the actual optimizer) will change it inside the fit function.
# - Changing self.learn.data or self.learn.opt directly WILL NOT change the data or the optimizer inside the fit function.
# In any of the callbacks you can unpack in the kwargs:
# - `n_epochs`, contains the number of epochs the training will take in total
# - `epoch`, contains the number of the current epoch
# - `iteration`, contains the number of iterations done since the beginning of training
# - `num_batch`, contains the number of the batch we're at in the dataloader
# - `last_input`, contains the last input that got through the model (eventually updated by a callback)
# - `last_target`, contains the last target that got through the model (eventually updated by a callback)
# - `last_output`, contains the last output spitted by the model (eventually updated by a callback)
# - `last_loss`, contains the last loss computed (eventually updated by a callback)
# - `smooth_loss`, contains the smoothed version of the loss
# - `last_metrics`, contains the last validation loss and metrics computed
# - `pbar`, the progress bar
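# As a concrete sketch of how these kwargs can be consumed, here is a tiny, purely hypothetical logger. It only mimics the calling convention rather than subclassing fastai's actual `Callback`, so it runs stand-alone:

```python
# Hypothetical stand-alone sketch (does not import fastai): a callback
# that records two of the kwargs listed above at the end of every batch.
class LossLogger:
    def __init__(self):
        self.history = []

    def on_batch_end(self, **kwargs):
        # fastai's fit loop passes these (among others) as keyword arguments
        self.history.append((kwargs.get('iteration'), kwargs.get('last_loss')))

# Simulate the training loop calling the callback after two batches
cb = LossLogger()
cb.on_batch_end(iteration=0, last_loss=1.25, num_batch=0)
cb.on_batch_end(iteration=1, last_loss=1.10, num_batch=1)
print(cb.history)  # [(0, 1.25), (1, 1.1)]
```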
# ### Methods your subclass can implement
# All of these methods are optional; your subclass can handle as many or as few as you require.
# + hide_input=true
show_doc(Callback.on_train_begin)
# -
# Here we can initialize anything we need.
# The optimizer has now been initialized. We can change any hyper-parameters by typing, for instance:
#
# ```
# self.opt.lr = new_lr
# self.opt.mom = new_mom
# self.opt.wd = new_wd
# self.opt.beta = new_beta
# ```
# + hide_input=true
show_doc(Callback.on_epoch_begin)
# -
# This is not technically required since we have `on_train_begin` for epoch 0 and `on_epoch_end` for all the other epochs,
# yet it makes writing code that needs to be done at the beginning of every epoch easier and more readable.
# + hide_input=true
show_doc(Callback.on_batch_begin)
# -
# Here is the perfect place to prepare everything before the model is called.
# Example: change the values of the hyperparameters (if we don't do it in `on_batch_end` instead)
#
# If we return something, that will be the new value for `xb`,`yb`.
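# A minimal, hypothetical sketch of that contract (stand-alone, not subclassing fastai's actual `Callback`): returning a value from `on_batch_begin` replaces the batch the model will see.

```python
# Hypothetical illustration of on_batch_begin returning a modified batch.
class InputScaler:
    def __init__(self, scale):
        self.scale = scale

    def on_batch_begin(self, last_input, last_target, **kwargs):
        # The returned pair would be used by the loop in place of (xb, yb)
        return [x * self.scale for x in last_input], last_target

cb = InputScaler(scale=2.0)
xb, yb = cb.on_batch_begin([1.0, 2.0], [0, 1], train=True)
print(xb, yb)  # [2.0, 4.0] [0, 1]
```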
# + hide_input=true
show_doc(Callback.on_loss_begin)
# -
# Here is the place to run some code that needs to be executed after the output has been computed but before the
# loss computation.
# Example: putting the output back in FP32 when training in mixed precision.
#
# If we return something, that will be the new value for the output.
# + hide_input=true
show_doc(Callback.on_backward_begin)
# -
# Here is the place to run some code that needs to be executed after the loss has been computed but before the gradient computation.
# Example: `reg_fn` in RNNs.
#
# If we return something, that will be the new value for loss. Since the recorder is always called first,
# it will have the raw loss.
# + hide_input=true
show_doc(Callback.on_backward_end)
# -
# Here is the place to run some code that needs to be executed after the gradients have been computed but
# before the optimizer is called.
# + hide_input=true
show_doc(Callback.on_step_end)
# -
# Here is the place to run some code that needs to be executed after the optimizer step but before the gradients
# are zeroed.
# + hide_input=true
show_doc(Callback.on_batch_end)
# -
# Here is the place to run some code that needs to be executed after a batch is fully done.
# Example: change the values of the hyperparameters (if we don't do it in `on_batch_begin` instead)
#
# If we return true, the current epoch is interrupted (example: lr_finder stops the training when the loss explodes)
# + hide_input=true
show_doc(Callback.on_epoch_end)
# -
# Here is the place to run some code that needs to be executed at the end of an epoch.
# Example: Save the model if we have a new best validation loss/metric.
#
# If we return true, the training stops (example: early stopping)
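# The early-stopping idea can be sketched stand-alone (hypothetical code, not fastai's actual `EarlyStoppingCallback`): return `True` from `on_epoch_end` once the validation loss has stopped improving for a few epochs.

```python
# Hypothetical sketch: stop training after `patience` epochs without improvement.
class EarlyStopper:
    def __init__(self, patience=2):
        self.patience = patience
        self.best = float('inf')
        self.wait = 0

    def on_epoch_end(self, last_metrics, **kwargs):
        val_loss = last_metrics[0]
        if val_loss < self.best:
            self.best, self.wait = val_loss, 0
        else:
            self.wait += 1
        return self.wait >= self.patience  # True stops training

stopper = EarlyStopper(patience=2)
stops = [stopper.on_epoch_end([l]) for l in (0.9, 0.8, 0.85, 0.84)]
print(stops)  # [False, False, False, True]
```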
# + hide_input=true
show_doc(Callback.on_train_end)
# -
# Here is the place to tidy everything. It's always executed even if there was an error during the training loop,
# and has an extra kwarg named exception to check if there was an exception or not.
# Examples: save log_files, load best model found during training
# ## Annealing functions
# The following functions provide different annealing schedules. You probably won't need to call them directly, but would instead use them as part of a callback. Here's what each one looks like:
# + hide_input=true
annealings = "NO LINEAR COS EXP POLY".split()
fns = [annealing_no, annealing_linear, annealing_cos, annealing_exp, annealing_poly(0.8)]
for fn, t in zip(fns, annealings):
plt.plot(np.arange(0, 100), [fn(2, 1e-2, o)
for o in np.linspace(0.01,1,100)], label=t)
plt.legend();
# + hide_input=true
show_doc(annealing_cos)
# + hide_input=true
show_doc(annealing_exp)
# + hide_input=true
show_doc(annealing_linear)
# + hide_input=true
show_doc(annealing_no)
# + hide_input=true
show_doc(annealing_poly)
# + hide_input=true
show_doc(CallbackHandler)
# -
# You probably won't need to use this class yourself. It's used by fastai to combine all the callbacks together and call any relevant callback functions for each training stage. The methods below simply call the equivalent method in each callback function in [`self.callbacks`](/callbacks.html#callbacks).
# + hide_input=true
show_doc(CallbackHandler.on_backward_begin)
# + hide_input=true
show_doc(CallbackHandler.on_backward_end)
# + hide_input=true
show_doc(CallbackHandler.on_batch_begin)
# + hide_input=true
show_doc(CallbackHandler.on_batch_end)
# + hide_input=true
show_doc(CallbackHandler.on_epoch_begin)
# + hide_input=true
show_doc(CallbackHandler.on_epoch_end)
# + hide_input=true
show_doc(CallbackHandler.on_loss_begin)
# + hide_input=true
show_doc(CallbackHandler.on_step_end)
# + hide_input=true
show_doc(CallbackHandler.on_train_begin)
# + hide_input=true
show_doc(CallbackHandler.on_train_end)
# + hide_input=true
show_doc(OptimWrapper)
# -
# This is a convenience class that provides a consistent API for getting and setting optimizer hyperparameters. For instance, for [`optim.Adam`](https://pytorch.org/docs/stable/optim.html#torch.optim.Adam) the momentum parameter is actually `betas[0]`, whereas for [`optim.SGD`](https://pytorch.org/docs/stable/optim.html#torch.optim.SGD) it's simply `momentum`. As another example, the details of handling weight decay depend on whether you are using `true_wd` or the traditional L2 regularization approach.
#
# This class also handles setting different WD and LR for each layer group, for discriminative layer training.
# + hide_input=true
show_doc(OptimWrapper.create)
# + hide_input=true
show_doc(OptimWrapper.read_defaults)
# + hide_input=true
show_doc(OptimWrapper.read_val)
# + hide_input=true
show_doc(OptimWrapper.set_val)
# + hide_input=true
show_doc(OptimWrapper.step)
# + hide_input=true
show_doc(OptimWrapper.zero_grad)
# + hide_input=true
show_doc(SmoothenValue)
# -
# Used for smoothing loss in [`Recorder`](/basic_train.html#Recorder).
# + hide_input=true
show_doc(SmoothenValue.add_value)
# + hide_input=true
show_doc(Stepper)
# -
# Used for creating annealing schedules, mainly for [`OneCycleScheduler`](/callbacks.one_cycle.html#OneCycleScheduler).
# + hide_input=true
show_doc(Stepper.step)
# + hide_input=true
show_doc(AverageMetric)
# -
# See the documentation on [`metrics`](/metrics.html#metrics) for more information.
# ## Undocumented Methods - Methods moved below this line will intentionally be hidden
# + hide_input=true
show_doc(do_annealing_poly)
# -
# ## New Methods - Please document or move to the undocumented section
# + hide_input=true
show_doc(AverageMetric.on_epoch_begin)
# -
#
# + hide_input=true
show_doc(AverageMetric.on_batch_end)
# -
#
# + hide_input=true
show_doc(AverageMetric.on_epoch_end)
# -
#
|
docs_src/callback.ipynb
|
# + [markdown] colab_type="text" id="6TuWv0Y0sY8n"
# # Getting Started in TensorFlow
# ## A look at a very simple neural network in TensorFlow
# + [markdown] colab_type="text" id="u9J5e2mQsYsQ"
# This is an introduction to working with TensorFlow. It works through an example of a very simple neural network, walking through the steps of setting up the input, adding operators, setting up gradient descent, and running the computation graph.
#
# This tutorial presumes some familiarity with the TensorFlow computational model, which is introduced in the [Hello, TensorFlow](../notebooks/1_hello_tensorflow.ipynb) notebook, also available in this bundle.
# + [markdown] colab_type="text" id="Dr2Sv0vD8rT-"
# ## A simple neural network
#
# Let's start with code. We're going to construct a very simple neural network computing a linear regression between two variables, y and x. It tries to find the best $w_1$ and $w_2$ for the function $y = w_2 x + w_1$ given the data. The data we're going to give it is toy data: a linear relationship perturbed with random noise.
#
# This is what the network looks like:
# + cellView="both" colab={"autoexec": {"startup": false, "wait_interval": 0}, "output_extras": [{"item_id": 1}]} colab_type="code" executionInfo={"elapsed": 681, "status": "ok", "timestamp": 1474671827305, "user": {"displayName": "", "photoUrl": "", "userId": ""}, "user_tz": 420} id="q09my4JYtKXw" outputId="4938066b-231d-4078-e2dd-fd223eca7c9f"
from __future__ import print_function
from IPython.display import Image
import base64
Image(data=base64.decodestring("iVBORw0KGgoAAAANSUhEUgAAAJYAAABkCAYAAABkW8nwAAAO90lEQVR4Xu2dT5Dc1J3Hv+YQT8VJZUhVdprLWs4FTSrGGv4ql9CuHBCH4GaTFCLZwnIcjOAy8l6Q/1SlU4XHcg6xJgtY2OOik2KxSGoTGWrXzYFC2T2MDAtWitRavmQ0e9k2SYGowom4hNRPtqA9TE+rW3/cPfPepcfup6f3fu/Tv9/T+/PVpo8//vhjsMQsULAFNjGwCrYoKy6xAAOLgVCKBRhYpZiVFcrAYgyUYgEGVilmZYUysBgDpViAgVWKWVmhDCzGQCkWGEuwrly5gtf++zW887/vYOn/lnD5T5cT40x9ZQrb/nEbxDtFiHeI2LJlSylGY4X2t8BYgUVAvfzqy3i5/TI+vPLhmq37wpYv4AHpATxw3wMMsP4cFJ5jbMAiqA4eOYg/Lv8xMcL26e34+vTXk8+vbv1q8n/03TsX38EfLv4h+aRE380dmmNwFY7O2gWOBVgE1Y/2/yjxUls+vwXaY1oS7tZK3v94MJ8zceUvV0Dea+H4AoOrQrhGHqxuT0Xjp0P7D2HqH6Yymejyu5dx5PiRZBxGnmt+bj7TdSxTfgv0ASuAzglwmyE8pfbZu3VaEDkDdT+AweevzGolvPjvL+LMb84knmr+yHxmqNKyCK7ZQ7OJ5yIo+3m6clqx8UrNB1bso2W64FQN9cnijdcdAvNAQWGRPBcLicX3Ua8S84FVcj3PnjuLhRcWkgH63OG5XHc7+NTBZEBP47NvffNbucpiF/e3QCaw2g0NfNvES5c+wtQ9u2G0LCj8BLAiFEaeBU0zYJ9fxkfYjKl7FZgtCzIHIA7QUmXov/g9LmMztt6rwLBMyFROj3TkZ0fgveXh4X96GN//zvf7t2aNHGlI7VlW0pYmRC+AKUwAsQu5thOuvIjQEjGBGJ7CQYptdOw6etc6VzXXzcUZwJrGseWt2P28DV2I4OgyDgQKFgMTYtQ1xqq10eDuR6j8Fi1NxGTkwpAfRos7h05bQscQIFgibEeHMBHCVhs4EBtY8lQQd6ulvbN78e6f302mC7Z/bXsuo9NkKk1X9PZ+IUyeR0sN4GscYl8DPzOP5VuPYynQwMU+dL4O3wzRbpQQ93O1bvQuzgRWS0p/tQA6Nuqcilq7A5u3Px28T7qw7BB1VUHqhEKTB2+pCAIVHZVD3dPgujpE6peOBzesQRS5nr/+b//g24nF7JN27qkCGq/J++RknHXm5JlVeiKGr/MQPQMdV0ZkCRBbNUwEMYzQhRyZEHgHOv29ynPM6HXtja1Rf7B4AZ7RgZv+SuMAOj+NtrYEX3avfyqMfDi2DdcLEAQBvPOX8MGtR3Ex0MEFJiRxP373wWZsvaeBhixDVRrg1/jxlwEWPV3ap+xVrR57Cjgpht2xEDV4mLIFvqkiaoUwwzp4U4Hv9/awN7YrR+vuGcAS4ZsdtKV0VNEFVqMLrIkWJGEPPP4hKA0RgiCAc1XsdJQErGQ2Ig7hOQ5sx4Hz0u+wvHX2akjtMWCpNhQCiCicq+AcCx1Fh9B2IegcNN6B4Teg1z0EeknzKqPFRe7a9AeLm4ajXvzUoJEDqUahMESrKxSqbQHbDBGLoXUNlBiuUsNOT8fFQEVsNdHmdOjStTgSGOCnLTQuBDBosLxKqnTwntw/glPnoHMS4E6iFVjgbBGcwUGMPAjtawP73GZf/wVkAutYtAvPezYUPoKjipBdGZ5vQOgavGteHbfsiXD09TZUIUbg6JD3vITlrU/iYthErPOYaQk44ZhocDF8U0HDqsEOHfQaC7/2X68lyzJVTjd0WiJu2XMem++7+tAxSd52+hguTe3GYtjq6V3XPyqDtbA/WLyAtqRg0rHhLceo3avCsk0kjqd7uoEL0FJkaC/9Hh/gS9ixS0dTCaDKHVidNhoTNN2gQP/FedAmly/t2IWm2YK2xswqDbj3antz
z5oToD/915/i5smbcdo8vfaDQGiC37YfEyeW4KtcMu2g1HbCrp9Dx5Fw3ZCw04ZSb0Jse6CsLH1qgZFfK0znn+hpznzKHGpJRzus4YJ/AX/78G94ofUC7r777pwMxAhdE6pyAK8u78CJJZ+BtcKiIw8Wea0DTx34ZCH5oHYwM1y0TjhnziXbaWgB+4cP/RCPPfYYtm/fjpMnT+Kmm24aDrDYhdpoQdAbaMtNSB4Da6UhRx4sqnB3SCTPNbtvtu9iMoU/Wg5Kt9p0h8DTp09j3759ePrpp/H4448PB1fylOtC5jTUGVifseFYgJXClXou+jcN6Gk2nj7JG1Gi7TG0Hkiz7OlGP/ru6OGjq46rnnjiCSwuLibe66677hocMAZWT5uNDVgpXGfbZ5OtybQNZq1EE6G0NXmXtGvNwbrv+4n3uu222wYPjwys9QFW2goKjbQ4Tdth6CAFeSpK5J3oQMUwhynS8PjMM89AVdVs3ouBtb7Aytbrw+WiMZfnednCIwOLgTUIZml43LFjB5577rnhnx4Huek6yztWY6yqbb+wsJBMTwwUHquu5Ijej4GVoWMoPJ4/fz7xXkM9PWa4x3rLwsDK2KMXLlxIvBeFR5qe2LRpU8YrN2Y2BtaA/U7hkaYnnn322exPjwPeYz1kZ2AN2YtpeCTvdeeddw5Zyvq9jIGVo28pPJL3ok2NLDxeb0gGVg6w0kvT8HjixIlkHJY1lauaE8GRangwsvD/noKqt+kzsLJSkCEfzdi/8cYbifdaKzxWoppDmxJ5FT54NH06YZShAQVmYWAVaEwqKg2PMzMzyfTEyqfHqlRzAoOH6OqwJnXoNQeBSWcjq0sMrJJsferUqSQsdofHylRzYg8aLyG0QtiTOvhGhFZglyKD0Mt8DKySwEqLpfD45ptvYn5+Hr/+z19/sukwj2pOP72vyJXBy4BNME340Pg6AiNAu8IDkQysksGi4t9++2189wffxee++DkIO4TcqjlrSw504Eg81FobYetq+KOwKDgagjVOnRdtBgZW0RZdpbw0BL73/nv4yZM/6bv7tVeVxkk1h4FVAVgbUTWHgVUBWGUcvCVV6EP/cuiztQ9NCNsMiIshrPSIeaK3oUNIlXQqaDMDqwIjlyEV0Fv6MoQlbENT/FTIhWSXOF2AF5jocei8cCswsAo36WcLLEPchO7yyr+9smrt6TQ3geQmcgcd2CQbIHoIDKGyuSwG1joEi06oU+jj3RAWR2HQgFiiTuxqJmRgVQBWGaGQDo78/OjPe9T+qpfSeBeeqIM3JPip4k8F7aVbMLAqMHSlg/dr7YkcCZxWg1Jz0G5UL7/EwKoArBuhmoNEbupBvPrRDhxf8qFVLFrCwKoArFQi4P3o/VwTpCmgdBi3r2oOIrQbNdwfGljytZ46r2U1n4FVlmW7yn3rrbfwvX/+XrKkMyPM5FLNIS2KbCrSNI8loKX48G6AxhIDq2SwaIcDgWWaJn71H78qRDWnlxbF1aaQxJILj6TRjRhm0L4hYrwMrJLAos1+BBXtyaLty5SKVs1Zverx1RB4dhIPPe/CVioeXF2rFAOrYLDIOxFQd9xxRwLVytSt90XfFaGaU3ATCimOgVWIGa8WkoY9AorA6pUIrqJVcwpsRiFFMbAKMONqYS9LsWWo5mS5bxV5GFg5rExhj8ZPdHBitbCXo+ixv5SBNWQXpmGPvNXtt98+ZCnr9zIG1oB9O2zYG/A2Y5+dgZWxC1nYy2goNt2Q3VA0jqIDESzsZbcZ81hr2CoNe/T56KOPZrcqy8m2zazGAAt7+X8ZzGOtsCELe/mhohLGEqwyVFpY2CsGqLSUsQKrDJUWFvaKBWrswCpDpYWFvXKgKiYUxh5U/huwhd8idBqYRARX4bHTldd8Le8gTSpapYWWX0is47qnveTdi02I6aFOejlAbSdcOT2fF8NTOEixDTqnV6Uk0CC2GpW8hYTCyFXA72yj8XoAAzoE+nsxgNnrZc8DtL7b
U9HJlDwqLY9855FkbY8ktS3LWlGLECbPo6UG8DUOsa+Bn5nH8q3HsRRo4GISL6vDN0O0e70SdoB2rfeshYBF71Juyzzu90TcF59FIC8WJvSVvgiT9nnPH5nP/K7CtOPonYWzh2aTF2Fu+usmvPjLF3us7cXwdR6iZ6DjyogsAWKrhokghhG6kCMTAu9Ap7+r1l0cQwoLAote4+ugwT+IsxO78XrQKkTkqzsEkqeily8Nk0il5cfHfowv3/xlLBxf6Pk2sNhTwEkx7I6FqMHDlC3wTRVRK4QZ1sGbCnxfrfxgwjBtvtHXFAZW7OsQZo7hEm7Fkxf8nm+mH6TBlau0RG00OBWcY6Gj6BDaLgSdDn46MPwG9Hr15/MGsdco5S0GrDiAIU7D5M/AgIo9gY6Lng4+5wi3jIOea59wieCQzgEnAe4kWoEFzhbBGRzEyIPQDmBWpaoxSpQMUZdCwCLh1OlmDWcCBzJsSNzDiIyL8LR8Ur1lHE2nPeZzh+d6mooENW7Zcx6b7zuHTlvCJB1Nnz6GS1O7sUhKxDl/LEP00Vhekh8sUjThNUyYAdxr59dCSwSvAWbg5Xq7exkqLfRO6TMnz/TurNAEv20/Jk4swaf2xC6U2k7Y9XPoOBIm6crYh6UoaLodABOoSU3YlpLbQ48lQT0qnR+sEq1RBlj0dGmfsnPVOtB51IMmfEdGLQ7RkkSYkps8VbJ01QIjDdaNCIVZwOi4DnxOgsRRXIzhazwakY3gmphsljLWe56RBqv6wfvg3R0HFqS6CcHxC5kQHrwGo3nFSIN1Q1RaBuinyDchSyYmDRcthWPLPF22G2mwuo+k55kgHUylJRtZoa1A0kI0bAdGPRnSszQuYFE90yUdepoznzKHWtLRDmsglZY8cHZTE7UVCGqEpmtDScZZLK20wEh7LKpst9YBKQUf1A5mhovWCefMuU9eM9JbWnEQMAIY/DQOXLr+mqmHXkfIdj18YpSRByuFa6+2F1f+cgXkuWb3zfZdN6Twt/DCQuKpsgmVDQIXy9vPAmMB1krPRf9eryot/TpsXL4fG7BSuNa7Ssu4gNOvnmMFVtqY9azS0q/DxuX7sQRrXIy7kevJwNrIvV9i2xlYJRp3IxfNwNrIvV9i2xlYJRp3IxfNwNrIvV9i2xlYJRp3IxfNwNrIvV9i2xlYJRp3Ixf9d0NIelzdt4X5AAAAAElFTkSuQmCC".encode('utf-8')), embed=True)
# + [markdown] colab_type="text" id="fBQq_R8B8rRf"
# Here is the TensorFlow code for this simple neural network and the results of running this code:
# + cellView="both" colab={"autoexec": {"startup": false, "wait_interval": 0}, "output_extras": [{"item_id": 1}]} colab_type="code" executionInfo={"elapsed": 7741, "status": "ok", "timestamp": 1474671834967, "user": {"displayName": "", "photoUrl": "", "userId": ""}, "user_tz": 420} id="Dy8pFefa_Ho_" outputId="318456b0-f9de-4717-d9c7-956b5d390d05"
#@test {"output": "ignore"}
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# Set up the data with a noisy linear relationship between X and Y.
num_examples = 50
X = np.array([np.linspace(-2, 4, num_examples), np.linspace(-6, 6, num_examples)])
X += np.random.randn(2, num_examples)
x, y = X
x_with_bias = np.array([(1., a) for a in x]).astype(np.float32)
losses = []
training_steps = 50
learning_rate = 0.002
with tf.Session() as sess:
# Set up all the tensors, variables, and operations.
input = tf.constant(x_with_bias)
target = tf.constant(np.transpose([y]).astype(np.float32))
weights = tf.Variable(tf.random_normal([2, 1], 0, 0.1))
tf.global_variables_initializer().run()
yhat = tf.matmul(input, weights)
  yerror = tf.subtract(yhat, target)
loss = tf.nn.l2_loss(yerror)
update_weights = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
for _ in range(training_steps):
# Repeatedly run the operations, updating the TensorFlow variable.
update_weights.run()
losses.append(loss.eval())
# Training is done, get the final values for the graphs
betas = weights.eval()
yhat = yhat.eval()
# Show the fit and the loss over time.
fig, (ax1, ax2) = plt.subplots(1, 2)
plt.subplots_adjust(wspace=.3)
fig.set_size_inches(10, 4)
ax1.scatter(x, y, alpha=.7)
ax1.scatter(x, np.transpose(yhat)[0], c="g", alpha=.6)
line_x_range = (-4, 6)
ax1.plot(line_x_range, [betas[0] + a * betas[1] for a in line_x_range], "g", alpha=0.6)
ax2.plot(range(0, training_steps), losses)
ax2.set_ylabel("Loss")
ax2.set_xlabel("Training steps")
plt.show()
# + [markdown] colab_type="text" id="vNtkU8h18rOv"
# In the remainder of this notebook, we'll go through this example in more detail.
# + [markdown] colab_type="text" id="r6rsv-q5gnn-"
# ## From the beginning
#
# Let's walk through exactly what this is doing from the beginning. We'll start with what the data looks like, then we'll look at this neural network, what is executed when, what gradient descent is doing, and how it all works together.
# + [markdown] colab_type="text" id="UgtkJKqAjuDj"
# ## The data
#
# This is a toy data set here. We have 50 (x,y) data points. At first, the data is perfectly linear.
# + cellView="form" colab={"autoexec": {"startup": false, "wait_interval": 0}, "output_extras": [{"item_id": 1}]} colab_type="code" executionInfo={"elapsed": 271, "status": "ok", "timestamp": 1474671835304, "user": {"displayName": "", "photoUrl": "", "userId": ""}, "user_tz": 420} id="-uoBWol3klhA" outputId="cc31ce5d-9b65-4ef6-8475-643be268569a"
#@test {"output": "ignore"}
num_examples = 50
X = np.array([np.linspace(-2, 4, num_examples), np.linspace(-6, 6, num_examples)])
plt.figure(figsize=(4,4))
plt.scatter(X[0], X[1])
plt.show()
# + [markdown] colab_type="text" id="AId3xHBNlcnk"
# Then we perturb it with noise:
# + cellView="form" colab={"autoexec": {"startup": false, "wait_interval": 0}, "output_extras": [{"item_id": 1}]} colab_type="code" executionInfo={"elapsed": 375, "status": "ok", "timestamp": 1474671835705, "user": {"displayName": "", "photoUrl": "", "userId": ""}, "user_tz": 420} id="fXcGNNtjlX63" outputId="455c3e70-a724-4e0a-d08e-9bf6bd1aa7e9"
#@test {"output": "ignore"}
X += np.random.randn(2, num_examples)
plt.figure(figsize=(4,4))
plt.scatter(X[0], X[1])
plt.show()
# + [markdown] colab_type="text" id="3dc1cl5imNLM"
# ## What we want to do
#
# What we're trying to do is calculate the green line below:
# + cellView="form" colab={"autoexec": {"startup": false, "wait_interval": 0}, "output_extras": [{"item_id": 1}]} colab_type="code" executionInfo={"elapsed": 1150, "status": "ok", "timestamp": 1474671836784, "user": {"displayName": "", "photoUrl": "", "userId": ""}, "user_tz": 420} id="P0m-3Mf8sQaA" outputId="32a8a45d-ba64-4286-acf7-0883d9184693"
#@test {"output": "ignore"}
weights = np.polyfit(X[0], X[1], 1)
plt.figure(figsize=(4,4))
plt.scatter(X[0], X[1])
line_x_range = (-3, 5)
plt.plot(line_x_range, [weights[1] + a * weights[0] for a in line_x_range], "g", alpha=0.8)
plt.show()
# + [markdown] colab_type="text" id="VYUr2uPA9ah8"
# Remember that our simple network looks like this:
# + cellView="form" colab={"autoexec": {"startup": false, "wait_interval": 0}, "output_extras": [{"item_id": 1}]} colab_type="code" executionInfo={"elapsed": 898, "status": "ok", "timestamp": 1474671837740, "user": {"displayName": "", "photoUrl": "", "userId": ""}, "user_tz": 420} id="gt8UuSQA9frA" outputId="6eb7616b-25a9-4845-aeab-7472201c60f6"
from IPython.display import Image
import base64
Image(data=base64.decodebytes("<KEY>".encode('utf-8')), embed=True)
# + [markdown] colab_type="text" id="Ft95NDUZy4Rr"
# That's equivalent to the function $\hat{y} = w_2 x + w_1$. What we're trying to do is find the "best" weights $w_1$ and $w_2$. That will give us that green regression line above.
#
# What are the best weights? They're the weights that minimize the difference between our estimate $\hat{y}$ and the actual y. Specifically, we want to minimize the sum of the squared errors, so minimize $\sum{(\hat{y} - y)^2}$, which is known as the *L2 loss*. So, the best weights are the weights that minimize the L2 loss.
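# For a concrete feel of the loss, here is a small NumPy sketch (the tiny data set and names are ours, not the notebook's): it evaluates the L2 loss for two candidate lines.

```python
import numpy as np

def l2_loss(w1, w2, x, y):
    # Half the sum of squared errors between yhat = w2*x + w1 and y,
    # matching what tf.nn.l2_loss computes on the residuals.
    yhat = w2 * x + w1
    return 0.5 * np.sum((yhat - y) ** 2)

x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 3.0, 5.0])   # exactly y = 2x + 1

print(l2_loss(1.0, 2.0, x, y))  # perfect fit: 0.0
print(l2_loss(0.0, 2.0, x, y))  # off by 1 at every point: 1.5
```

# The best weights are the ones that drive this number as low as the noise in the data allows.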
# + [markdown] colab_type="text" id="RHDGz_14vGNg"
# ## Gradient descent
#
# What gradient descent does is start with random weights for $\hat{y} = w_2 x + w_1$ and gradually moves those weights toward better values.
#
# It does that by following the downward slope of the error curves. Imagine the possible errors we could get with different weights as a landscape. From whatever weights we have, moving in some directions will increase the error, like going uphill, while moving in others will decrease it, like going downhill. We want to roll downhill, always moving the weights toward lower error.
#
# How does gradient descent know which way is downhill? It follows the partial derivatives of the L2 loss. The partial derivative is like a velocity, saying which way the error will change if we change the weight. We want to move in the direction of lower error. The partial derivative points the way.
#
# So, what gradient descent does is start with random weights and gradually walk those weights toward lower error, using the partial derivatives to know which direction to go.
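# The procedure just described can be sketched in a few lines of plain NumPy (a minimal sketch on toy data of our own; the update rule is the same one the notebook builds below with TensorFlow):

```python
import numpy as np

rng = np.random.RandomState(0)
x = np.linspace(-2, 4, 50)
y = 2.0 * x - 1.0 + rng.randn(50)  # a noisy line: true weights are w1=-1, w2=2

w1, w2 = 0.0, 0.0          # start from arbitrary weights
learning_rate = 0.002
for _ in range(200):
    err = (w2 * x + w1) - y
    # Partial derivatives of the L2 loss with respect to w1 and w2.
    g1, g2 = np.sum(err), np.sum(err * x)
    # Step downhill: move the weights against the gradient.
    w1 -= learning_rate * g1
    w2 -= learning_rate * g2

print(w1, w2)  # close to the true -1 and 2
```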
# + [markdown] colab_type="text" id="W7SgnPAWBX2M"
# ## The code again
#
# Let's go back to the code now, walking through it with many more comments in the code this time:
# + cellView="both" colab={"autoexec": {"startup": false, "wait_interval": 0}, "output_extras": [{"item_id": 1}]} colab_type="code" executionInfo={"elapsed": 548, "status": "ok", "timestamp": 1474671838303, "user": {"displayName": "", "photoUrl": "", "userId": ""}, "user_tz": 420} id="4qtXAPGmBWUW" outputId="841884bc-9a23-4627-cdec-8e7d4261a3c0"
#@test {"output": "ignore"}
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
# Set up the data with a noisy linear relationship between X and Y.
num_examples = 50
X = np.array([np.linspace(-2, 4, num_examples), np.linspace(-6, 6, num_examples)])
# Add random noise (gaussian, mean 0, stdev 1)
X += np.random.randn(2, num_examples)
# Split into x and y
x, y = X
# Add the bias node which always has a value of 1
x_with_bias = np.array([(1., a) for a in x]).astype(np.float32)
# Keep track of the loss at each iteration so we can chart it later
losses = []
# How many iterations to run our training
training_steps = 50
# The learning rate, also known as the step size. This controls how far
# we move down the gradient toward lower error at each step. Steps that are
# too large risk overshooting; steps that are too small slow the learning.
learning_rate = 0.002
# In TensorFlow, we need to run everything in the context of a session.
with tf.Session() as sess:
# Set up all the tensors.
# Our input layer is the x value and the bias node.
input = tf.constant(x_with_bias)
# Our target is the y values. They need to be massaged to the right shape.
target = tf.constant(np.transpose([y]).astype(np.float32))
# Weights are a variable. They change every time through the loop.
# Weights are initialized to random values (gaussian, mean 0, stdev 0.1)
weights = tf.Variable(tf.random_normal([2, 1], 0, 0.1))
# Initialize all the variables defined above.
tf.global_variables_initializer().run()
# Set up all operations that will run in the loop.
# For all x values, generate our estimate on all y given our current
# weights. So, this is computing y = w2 * x + w1 * bias
yhat = tf.matmul(input, weights)
# Compute the error, which is just the difference between our
# estimate of y and what y actually is.
  yerror = tf.subtract(yhat, target)
# We are going to minimize the L2 loss. The L2 loss is the sum of the
# squared error for all our estimates of y. This penalizes large errors
# a lot, but small errors only a little.
loss = tf.nn.l2_loss(yerror)
# Perform gradient descent.
# This essentially just updates weights, like weights += grads * learning_rate
# using the partial derivative of the loss with respect to the
# weights. It's the direction we want to go to move toward lower error.
update_weights = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# At this point, we've defined all our tensors and run our initialization
# operations. We've also set up the operations that will repeatedly be run
# inside the training loop. All the training loop is going to do is
# repeatedly call run, inducing the gradient descent operation, which has the effect of
# repeatedly changing weights by a small amount in the direction (the
# partial derivative or gradient) that will reduce the error (the L2 loss).
for _ in range(training_steps):
# Repeatedly run the operations, updating the TensorFlow variable.
sess.run(update_weights)
# Here, we're keeping a history of the losses to plot later
# so we can see the change in loss as training progresses.
losses.append(loss.eval())
# Training is done, get the final values for the charts
betas = weights.eval()
yhat = yhat.eval()
# Show the results.
fig, (ax1, ax2) = plt.subplots(1, 2)
plt.subplots_adjust(wspace=.3)
fig.set_size_inches(10, 4)
ax1.scatter(x, y, alpha=.7)
ax1.scatter(x, np.transpose(yhat)[0], c="g", alpha=.6)
line_x_range = (-4, 6)
ax1.plot(line_x_range, [betas[0] + a * betas[1] for a in line_x_range], "g", alpha=0.6)
ax2.plot(range(0, training_steps), losses)
ax2.set_ylabel("Loss")
ax2.set_xlabel("Training steps")
plt.show()
# + [markdown] colab_type="text" id="lSWT9YsLP1de"
# This version of the code has a lot more comments at each step. Read through the code and the comments.
#
# The core piece is the loop, which contains a single `run` call. `run` executes the operations necessary for the `GradientDescentOptimizer` operation. That includes several other operations, all of which are also executed each time through the loop. The `GradientDescentOptimizer` execution has a side effect of assigning to weights, so the variable weights changes each time in the loop.
#
# The result is that, in each iteration of the loop, the code processes the entire input data set, generates all the estimates $\hat{y}$ for each $x$ given the current weights $w_i$, finds all the errors and L2 losses $(\hat{y} - y)^2$, and then changes the weights $w_i$ by a small amount in the direction that will reduce the L2 loss.
#
# After many iterations of the loop, the amount we are changing the weights gets smaller and smaller, and the loss gets smaller and smaller, as we narrow in on near optimal values for the weights. By the end of the loop, we should be near the lowest possible values for the L2 loss, and near the best possible weights we could have.
# + [markdown] colab_type="text" id="dFOk7ERATLk2"
# ## The details
#
# This code works, but there are still a few black boxes that are worth diving into here. `l2_loss`? `GradientDescentOptimizer`? What exactly are those doing?
#
# One way to understand exactly what those are doing is to do the same thing without using those functions. Here is equivalent code that calculates the gradients (derivatives), L2 loss (sum squared error), and `GradientDescentOptimizer` from scratch without using those functions.
# + cellView="both" colab={"autoexec": {"startup": false, "wait_interval": 0}, "output_extras": [{"item_id": 1}]} colab_type="code" executionInfo={"elapsed": 4219, "status": "ok", "timestamp": 1474671842604, "user": {"displayName": "", "photoUrl": "", "userId": ""}, "user_tz": 420} id="_geHN4sPTeRk" outputId="3ee8e5e5-0db0-4e6b-ef7e-f9e530fa7bee"
#@test {"output": "ignore"}
# Use the same input data and parameters as the examples above.
# We're going to build up a list of the errors over time as we train to display later.
losses = []
with tf.Session() as sess:
# Set up all the tensors.
# The input is the x values with the bias appended on to each x.
input = tf.constant(x_with_bias)
# We're trying to find the best fit for the target y values.
target = tf.constant(np.transpose([y]).astype(np.float32))
# Let's set up the weights randomly
weights = tf.Variable(tf.random_normal([2, 1], 0, 0.1))
tf.global_variables_initializer().run()
# learning_rate is the step size, so how much we jump from the current spot
learning_rate = 0.002
# The operations in the operation graph.
# Compute the predicted y values given our current weights
yhat = tf.matmul(input, weights)
# How much does this differ from the actual y?
  yerror = tf.subtract(yhat, target)
  # The L2 loss: half the sum of the squared errors.
  loss = 0.5 * tf.reduce_sum(tf.multiply(yerror, yerror))
  # The partial derivatives of the loss with respect to each weight.
  gradient = tf.reduce_sum(tf.transpose(tf.multiply(input, yerror)), 1, keepdims=True)
  # Change the weights by subtracting the gradient scaled by the learning rate.
  update_weights = tf.assign_sub(weights, learning_rate * gradient)
# Repeatedly run the operation graph over the training data and weights.
for _ in range(training_steps):
sess.run(update_weights)
# Here, we're keeping a history of the losses to plot later
# so we can see the change in loss as training progresses.
losses.append(loss.eval())
# Training is done, compute final values for the graph.
betas = weights.eval()
yhat = yhat.eval()
# Show the results.
fig, (ax1, ax2) = plt.subplots(1, 2)
plt.subplots_adjust(wspace=.3)
fig.set_size_inches(10, 4)
ax1.scatter(x, y, alpha=.7)
ax1.scatter(x, np.transpose(yhat)[0], c="g", alpha=.6)
line_x_range = (-4, 6)
ax1.plot(line_x_range, [betas[0] + a * betas[1] for a in line_x_range], "g", alpha=0.6)
ax2.plot(range(0, training_steps), losses)
ax2.set_ylabel("Loss")
ax2.set_xlabel("Training steps")
plt.show()
# + [markdown] colab_type="text" id="TzIETgHwTexL"
# This code looks very similar to the code above, but without using `l2_loss` or `GradientDescentOptimizer`. Let's look at exactly what it is doing instead.
#
# This code is the key difference:
#
# >`loss = 0.5 * tf.reduce_sum(tf.multiply(yerror, yerror))`
#
# >`gradient = tf.reduce_sum(tf.transpose(tf.multiply(input, yerror)), 1, keepdims=True)`
#
# >`update_weights = tf.assign_sub(weights, learning_rate * gradient)`
#
# The first line calculates the L2 loss manually. It's the same as `l2_loss(yerror)`, which is half of the sum of the squared error, so $\frac{1}{2} \sum (\hat{y} - y)^2$. With this code, you can see exactly what the `l2_loss` operation does. It's the total of all the squared differences between the target and our estimates. And minimizing the L2 loss will minimize how much our estimates of $y$ differ from the true values of $y$.
#
# The second line calculates $\begin{bmatrix}\sum{(\hat{y} - y)*1} \\ \sum{(\hat{y} - y)*x_i}\end{bmatrix}$. What is that? It's the partial derivatives of the L2 loss with respect to $w_1$ and $w_2$, the same thing as what `gradients(loss, weights)` does in the earlier code. Not sure about that? Let's look at it in more detail. The gradient calculation is going to get the partial derivatives of loss with respect to each of the weights so we can change those weights in the direction that will reduce the loss. L2 loss is $\frac{1}{2} \sum (\hat{y} - y)^2$, where $\hat{y} = w_2 x + w_1$. So, using the chain rule and substituting in for $\hat{y}$ in the derivative, $\frac{\partial}{\partial w_2} = \sum{(\hat{y} - y)\, *x_i}$ and $\frac{\partial}{\partial w_1} = \sum{(\hat{y} - y)\, *1}$. `GradientDescentOptimizer` does these calculations automatically for you based on the graph structure.
#
# The third line is equivalent to `weights -= learning_rate * gradient`: it subtracts the gradient from the weights after scaling it by the learning rate (to avoid jumping too far each time, which risks moving in the wrong direction). It's also the same thing that `GradientDescentOptimizer(learning_rate).minimize(loss)` does in the earlier code: the optimizer updates the weights by subtracting the gradient scaled by the learning rate, which is exactly what `assign_sub(weights, learning_rate * gradient)` does.
#
# Hopefully, this other code gives you a better understanding of what the operations we used previously are actually doing. In practice, you'll want to use those high level operators most of the time rather than calculating things yourself. For this toy example and simple network, it's not too bad to compute and apply the gradients yourself from scratch, but things get more complicated with larger networks.
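# If you want to double-check the derivative formulas above, a finite-difference comparison is a quick sanity test (our own sketch, independent of the TensorFlow code):

```python
import numpy as np

rng = np.random.RandomState(1)
x = np.linspace(-2, 4, 50)
y = 2.0 * x - 1.0 + rng.randn(50)

def loss(w1, w2):
    # L2 loss: half the sum of squared errors.
    return 0.5 * np.sum((w2 * x + w1 - y) ** 2)

w1, w2 = 0.3, -0.7
err = w2 * x + w1 - y
# The analytic partial derivatives derived above: [d/dw1, d/dw2].
analytic = np.array([np.sum(err), np.sum(err * x)])

# Central finite differences approximate the same derivatives numerically.
eps = 1e-6
numeric = np.array([
    (loss(w1 + eps, w2) - loss(w1 - eps, w2)) / (2 * eps),
    (loss(w1, w2 + eps) - loss(w1, w2 - eps)) / (2 * eps),
])
print(np.max(np.abs(analytic - numeric)))  # tiny: the formulas agree
```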
# + cellView colab={"autoexec": {"startup": false, "wait_interval": 0}, "output_extras": []} colab_type="code" executionInfo={"elapsed": 164, "status": "ok", "timestamp": 1474671842705, "user": {"displayName": "", "photoUrl": "", "userId": ""}, "user_tz": 420} id="ty5-b_nYSYWR" outputId="311b7bff-5c8b-43ee-da0f-439a879636d1"
|
tensorflow/tools/docker/notebooks/2_getting_started.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Assignment 2: Spatial Database Creation and Data Queries
#
# + active=""
# Name: 郑昱笙
# + active=""
# Student ID: 3180102760
# -
# **Purpose:** Learn the OGC SFA standard and the open-source object-relational database system PostgreSQL with its spatial extension PostGIS; become familiar with looking up PostGIS spatial functions in the reference documentation and with creating a PostgreSQL spatial database and importing data; master geometric-relationship tests, spatial analysis, and the related SQL; and learn to display and analyze query results in QGIS and on online maps.
#
# **Notes:**
# * If an SQL error message comes back garbled, switch between SET client_encoding = 'GBK'; and SET client_encoding = 'UTF-8'; and reconnect to the database
# * Jupyter Notebook's SQL error reporting is limited; run statements in pgAdmin 4 first to see detailed error messages
# * Assignment 2 is worth 60 points in total; point values are marked after each graded question. You may discuss approaches with each other, but plagiarized or duplicated submissions lose points
# * In 作业2\_学号\_姓名.ipynb, replace 学号 (student ID) and 姓名 (name) with your own; include the execution results; compress 学号.jpg, 作业2\_学号\_姓名.ipynb, and the jsonData directory into 作业2\_学号\_姓名.rar/zip, **without the data files**, and submit it to Learning at ZJU by **2020-03-29**
# ### 1. The OGC Simple Feature Access standard
#
# The <a href="http://www.opengeospatial.org/docs/is" target="_blank">Open Geospatial Consortium</a>'s Simple Feature Access standard consists of two parts, Part 1 <a href="http://portal.opengeospatial.org/files/?artifact_id=25355" target="_blank">Common architecture</a> and Part 2 <a href="http://portal.opengeospatial.org/files/?artifact_id=25354" target="_blank">SQL option</a>, which specify the geospatial geometry types and their SQL implementation; both are recommended reading.
#
# #### The Introduction of Part 1, Common architecture:
#
# This part of the OpenGIS® Simple Feature Access specification (SFA), also known as ISO 19125, describes the common architecture for simple geographic features. The simple-feature object model is computing-platform independent and expressed in UML notation. The base class Geometry has the subclasses Point, Curve, Surface, and GeometryCollection. Every geometry object is associated with a Spatial Reference System, which describes the coordinate space in which the object is defined.
#
# The extended geometry model adds specialized 0-, 1-, and 2-dimensional collection classes (MultiPoint, MultiLineString, and MultiPolygon) for modeling collections of points, lines, and polygons respectively. MultiCurve and MultiSurface are abstract superclasses that generalize the interfaces for handling collections of curves and surfaces.
#
# #### The Introduction of Part 2, SQL option:
# Part 2 of the OpenGIS® Simple Feature Access specification (SFA), also known as ISO 19125, defines a standard Structured Query Language (SQL) schema that supports the storage, retrieval, query, and update of feature collections via the SQL Call-Level Interface (SQL/CLI) (ISO/IEC 9075-3:2003). A feature has both spatial and non-spatial attributes. Spatial attributes are geometry valued, and simple features are based on 2D (or lower-dimensional) geometry, namely points, curves, and surfaces, with linear interpolation between vertices in 2D and planar interpolation in 3D. This part builds on the common architecture components defined in Part 1.
#
# In an SQL implementation, a collection of features of a single type is stored in a feature table with a geometry-valued column. Each feature is typically a row of this table, which is logically joined to other tables using standard SQL techniques. The non-spatial attribute columns of a feature use SQL data types, including SQL3 user-defined types (UDTs); the spatial attribute columns use the SQL geometry data types of this standard. A feature-table schema can be implemented in SQL in two ways: the classic relational model using SQL predefined data types, or SQL extended with geometry types. In either case, geometries come with a set of SQL functions that implement geometric behavior and queries.
#
# In the implementation based on predefined data types, a geometry-valued column is realized as a geometry ID referencing a geometry table. The geometry data is stored in one or more rows of the geometry table that share that geometry ID as their primary key. The geometry table can be implemented with standard SQL numeric types or SQL binary types; schemas for both are described in this standard.
#
# The term "SQL with geometry types" refers to an SQL implementation extended with a set of geometry types. In such an implementation, a geometry-valued column is implemented as a column of a geometry type, and the mechanism for extending the SQL type system is user-defined types. Commercial SQL implementations supporting user-defined types have existed since mid-1997, and ISO standards for UDT definition also exist. This standard does not prescribe a particular UDT mechanism, but it requires interface standards that support UDT definition; those interfaces are described by the SQL3 UDTs in ISO/IEC 13249-3.
# <img src="polygon.svg">
# 1.1 Give the Well-Known Text (WKT) representation of the gray polygon in figure (a). (1 point)
# + active=""
# POLYGON((2 2, 11 2, 11 12, 2 12, 2 2), (4 4, 7 4, 7 9, 4 9, 4 4))
# -
# 1.2 Based on the Polygon assertions in 6.1.11.1 (the rules that define valid Polygons), explain why the geometry in figure (b) cannot be represented by a polygon. (1 point)
# + active=""
# Assertion (e): the interior of every Polygon is a connected point set.
# The interior of this figure is not a connected point set.
# Assertion (c): the Rings in the boundary of a Polygon may intersect at a Point but only as a tangent.
# Here two rings of the boundary share two points.
# -
# 1.3 Give the Dimensionally Extended Nine-Intersection Model (DE-9IM) of the green polygon (A) and the blue line (B) in figure (c). (1 point)
# + active=""
# I(B) B(B) E(B)
# I(A) 1 0 2
# B(A) 0 -1 1
# E(A) 1 0 2
# -
# 1.4 When a.Relate(b, "T\*T\*\*\*T\*\*") returns True, what is the spatial relationship between geometries a and b? (1 point)
# + active=""
# Overlaps
# -
# 1.5 Give the nine-intersection matrix (9IM) string representation of the spatial relationship Contains. (1 point)
# + active=""
# T*****FF*
# -
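# A relate pattern such as "T\*T\*\*\*T\*\*" can be matched against a DE-9IM matrix string with a few lines of Python (an illustrative helper of our own, not a PostGIS function; PostGIS performs this comparison on real geometries via ST_Relate):

```python
def relate_match(matrix, pattern):
    """Match a 9-character DE-9IM matrix (digits 0/1/2 for intersection
    dimension, 'F' for empty) against a pattern where 'T' means any
    dimension >= 0, 'F' means empty, '*' matches anything, and a digit
    requires that exact dimension."""
    assert len(matrix) == 9 and len(pattern) == 9
    for m, p in zip(matrix.upper(), pattern.upper()):
        if p == '*':
            continue
        if p == 'T':
            if m not in '012':
                return False
        elif m != p:
            return False
    return True

# The Overlaps pattern from question 1.4 against two sample matrices:
print(relate_match('212101212', 'T*T***T**'))  # True
print(relate_match('FF2FF1212', 'T*T***T**'))  # False: interiors do not intersect
```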
# ### 2. PostGIS implements the OGC SFA standard; when using the spatial types and functions, consult the <a href="http://postgis.net/docs/reference.html" target="_blank">reference documentation</a>.
# 2.1 Translate the Name and Description sections of the ST_MakePoint function in the PostGIS documentation.
# + active=""
# Name
# ST_MakePoint — Creates a 2D, 3DZ or 4D point geometry.
#
# Description
# Creates a 2D, 3DZ or 4D point geometry (geometry with measure). ST_MakePoint, while not being OGC compliant, is generally faster and more precise than ST_GeomFromText and ST_PointFromText. It is also easier to use if you have raw coordinates rather than WKT.
#
# Note: x is longitude and y is latitude.
#
# Use ST_MakePointM if you need to make a point with x, y, m.
#
# This function supports 3d and will not drop the z-index.
# -
# 2.2 ST_Distance函数说明:
# * For geometry type Returns the 2D Cartesian distance between two geometries in projected units (based on spatial ref).
# * For geography type defaults to return minimum geodesic distance between two geographies in meters.
#
# Using ST_Distance(geometry(Point, 4326), geometry(LineString, 4326)) to compute a distance in spatial reference system 4326, what distance is returned, and in what unit? (1 point)
# + active=""
# The minimum 2D Cartesian distance is returned, in the (projected) units of the spatial reference system, i.e. degrees for SRID 4326.
# -
# 2.3 Based on the documentation, compare the similarities and differences of the ~= operator, the = operator, ST_Equals and ST_OrderingEquals. (1 point)
# + active=""
# boolean =( geometry A , geometry B ): treats geometries as equal only if they are identical in every respect: same coordinates, in the same order.
# boolean ~=( geometry A , geometry B ): treats geometries as equal if their bounding boxes are the same. (Before PostGIS 1.5 it tested actual equality.)
# boolean ST_Equals(geometry A, geometry B): returns true if the geometries are spatially equal, regardless of point order; i.e. ST_Within(A,B) = true and ST_Within(B,A) = true.
# boolean ST_OrderingEquals(geometry A, geometry B): returns TRUE if the geometries are equal and their coordinates are in the same order.
# -
# 2.4 ST_Distance(Point, Polygon) <= 10 and ST_DWithin(Point, Polygon, 10) are functionally equivalent, but their efficiency differs greatly. Based on the documentation, explain why. (1 point)
# + active=""
# ST_DWithin uses a more short-circuit distance function which should make it more efficient for larger buffer regions.
# -
# 2.5 Based on the documentation, compare the similarities and differences of ST_DistanceSphere(geometry pointlonlatA, geometry pointlonlatB), ST_Distance(geometry g1, geometry g2) and ST_DistanceSpheroid(geometry pointlonlatA, geometry pointlonlatB, spheroid measurement_spheroid). (1 point)
# + active=""
# ST_DistanceSphere(geometry pointlonlatA, geometry pointlonlatB):
# Returns minimum distance in meters between two lon/lat points. Uses a spherical earth and radius derived from the spheroid defined by the SRID. Faster than ST_DistanceSpheroid, but less accurate. PostGIS Versions prior to 1.5 only implemented for points.
# ST_DistanceSpheroid(geometry pointlonlatA, geometry pointlonlatB, spheroid measurement_spheroid):
# Returns minimum distance in meters between two lon/lat geometries given a particular spheroid.
# This function does not look at the SRID of the geometry. It assumes the geometry coordinates are based on the provided spheroid.
# ST_Distance(geometry g1, geometry g2):
# For geometry types returns the minimum 2D Cartesian (planar) distance between two geometries, in projected units (spatial ref units).
# Similarities: all three take geometry arguments and return the distance between two geometries.
# Differences:
# ST_DistanceSphere computes on a sphere whose radius is derived from the spheroid defined by the geometry's SRID and returns meters; it is faster than ST_DistanceSpheroid but less accurate;
# ST_DistanceSpheroid must be given the spheroid explicitly and returns meters;
# ST_Distance returns the minimum 2D Cartesian (planar) distance, in projected units.
# -
# 2.6 Which function converts a MultiXXX into XXXs, e.g. yields the individual Polygons of a MultiPolygon? (1 point)
# + active=""
# select ST_Dump(ST_GeomFromText('MULTILINESTRING((0 0,1 1,1 2),(2 3,3 2,5 4))'))
# -
# ### 3. Creating and querying a spatial database of US lakes, cities, highways and their traffic accidents
#
# Create the hw2 database in PostgreSQL through pgAdmin 4, add the postgis extension (create extension postgis), and connect to the database.
# +
# %load_ext sql
from geom_display import display
from geom_display import choroplethMap
from geom_display import heatMap
# display([result1, result2, ...], divId, zoom) draws the geometries of every result in the array;
# each result relation must contain at least (gid, geom, name); zoom is the zoom level; name labels the geom on the map
# choroplethMap(result, divId, zoom) draws the result as a choropleth (thematic) map;
# the result relation must contain at least (gid, geom, name, value); value is the number mapped to color
# heatMap(result, divId, zoom) draws the result as a heat map;
# the result relation must contain at least (gid, geom, name); an optional value column is used for color mapping, default 1
# + magic_args="postgresql://postgres:postgres@localhost/hw2" language="sql"
#
# SET statement_timeout = 0;
# SET lock_timeout = 0;
# SET client_encoding = 'UTF-8';
# SET standard_conforming_strings = on;
# SET check_function_bodies = false;
# SET client_min_messages = warning;
# -
# 3.1 Import the US accidents, highways and lakes shapefile data into PostgreSQL with the PostGIS shapefile import tool. (1 point)
#
# The US highway traffic accident data comes from the [US Department of Transportation](https://www.transportation.gov/fastlane/2015-traffic-fatalities-data-has-just-been-released-call-action-download-and-analyze) ([White House press backup](https://obamawhitehouse.archives.gov/blog/2016/08/29/2015-traffic-fatalities-data-has-just-been-released-call-action-download-and-analyze)). STATE is the ID of one of the 56 US states; ST_CASE combines the state ID and an accident number; each accident happened in a county and city, at time day, month, year, day_week, hour and minute, at location latitude and longitud; drunk_dr greater than 0 means drunk driving was involved. latitude and longitud contain erroneous values, e.g. greater than 1000; ignore such records.
#
# Note: the shapefile files must not sit in a path containing Chinese characters, and the spatial reference system of usaccidents, ushighways and uslakes must be changed to 4326.
# highway_num = %sql select count(*) from ushighways;
# lake_num = %sql select count(*) from uslakes;
# accident_num = %sql select count(*) from usaccidents;
# highway_srid = %sql select ST_SRID(geom) from ushighways limit 1;
# lake_srid = %sql select ST_SRID(geom) from uslakes limit 1;
# accident_srid= %sql select ST_SRID(geom) from usaccidents limit 1;
print('the number of highways is ' + str(highway_num[0][0]))
print('the number of lakes is ' + str(lake_num[0][0]))
print('the number of accidents is ' + str(accident_num[0][0]))
print('the SRID of ushighways is ' + str(highway_srid[0][0]))
print('the SRID of uslakes is ' + str(lake_srid[0][0]))
print('the SRID of usaccidents is ' + str(accident_srid[0][0]))
# Change the SRID of usaccidents, ushighways and uslakes to 4326
# %sql select UpdateGeometrySRID('ushighways', 'geom', 4326);
# %sql select UpdateGeometrySRID('uslakes', 'geom', 4326);
# %sql select UpdateGeometrySRID('usaccidents', 'geom', 4326);
# 3.2 Create the relation uscities(gid, name, state, latitude, longitude), where gid has type integer, name and state have type varchar(100), and latitude and longitude have type numeric. (2 points)
# + magic_args="drop table if exists uscities;" language="sql"
# create table uscities(
# gid integer primary key,
# name varchar(100),
# state varchar(100),
# latitude numeric,
# longitude numeric
# );
# -
# 3.3 Import the uscities data with the [copy statement](https://www.postgresql.org/docs/current/static/sql-copy.html); mind the delimiter between attributes. (1 point)
# %sql copy uscities from 'D:\\works\\GIS\\hw2\\usdata\\uscity.txt' delimiter '#';
# %sql select * from uscities limit 3;
# 3.4 Add a geometry column geom to the relation uscities and populate it from each city's latitude and longitude; the spatial reference system must match ushighways and uslakes. (2 points)
# + language="sql"
# select AddGeometryColumn('uscities', 'geom', 4326, 'POINT', 2);
# UPDATE uscities SET geom = ST_SetSRID(ST_Point(longitude,latitude),4326);
# -
# 3.5 Display the City, Highway, Lake and Accident layers in QGIS, save a screenshot named 学号.jpg (your student ID) in the same directory as this file, and replace highways.png below with your student ID; Shift+Enter should then show the QGIS screenshot. Because of browser image caching the new image may not appear immediately; reopen the notebook to verify that it displays correctly. (1 point)
# <img src="3180102760.jpg">
# #### 3.6 Build the following GIS analyses and SQL queries; mind which spatial functions are valid on Geometry versus GeometryCollection.
# 3.6.0 Query the boundary of Lake Erie and show it on OpenStreetMap with the display function; display requires the result schema to contain at least the gid, name and geom attributes.
# +
query = """
select gid, 'Lake Erie''s Boundary' as name, ST_Boundary(geom) as geom
from uslakes
where name like '%Erie%'
"""
# result = %sql $query
display([result], "map0", 6)
# -
# 3.6.1 Query the number of points in the geometry of Lake Superior. (2 points)
# + language="sql"
# select gid, 'Lake Superior''s number of point' as name, ST_NPoints(geom) as number
# from uslakes
# where name like '%Superior%';
# -
# 3.6.2 Query the convex hull of the highway whose full name (full_name) is 'I 278' and show it on OpenStreetMap with display; the result schema must contain at least gid, name and geom. (2 points)
# +
query = """
select gid, name, ST_ConvexHull(geom) as geom
from ushighways
where full_name = 'I 278';
"""
# result1 = %sql $query
# result2 = %sql select gid, geom, full_name as name from ushighways where full_name = 'I 278'
display([result1, result2], "map1", 11)
# -
# 3.6.3 Query which lakes contain islands and show these lakes on OpenStreetMap with display; the result schema must contain at least gid, name and geom. (2 points)
# +
query = """
select gid, name, geom from uslakes where ST_NRings(geom) > 1;
"""
# result = %sql $query
display([result], "map2", 4)
# -
# 3.6.4 Check whether the lakes' area attribute is accurate (absolute error below 1e-6); list the lakes with inaccurate areas and their errors; the result schema is (gid, name, error). (2 points)<br/>
# **Data cleaning and validation**: input data may contain errors or noise; in that case the data needs to be validated and corrected through data cleaning.
# + language="sql"
# select gid, name, abs(ST_Area(geom) - shape_area) error from uslakes where abs(ST_Area(geom) - shape_area) >= 1e-6;
# -
# 3.6.5 Query the longest highway and its length in kilometers, and show it on OpenStreetMap with display; the result schema is (gid, name, geom, length), where name is the highway's full_name. (2 points)
# +
query = """
select gid, full_name as name, geom, ST_Length(geom::geography)/1000.0 length
from ushighways order by length desc limit 1;
"""
# result = %sql $query
print(result[0]['length'])
display([result], "map3", 4)
# -
# 3.6.6 Query the city closest to the centroid of Lake Ontario, and show the lake and the city on OpenStreetMap with display; the result schema must contain at least gid, name and geom, with the city's name in the format 'name in state'. (2 points)
# +
query = """
select gid, name || ' in ' || state as name, uc.geom, ST_Distance(ST_Centroid(la.geom) , uc.geom) distance from uscities uc,
(select geom from uslakes where name like '%Ontario%') la order by distance limit 1;
"""
# result1 = %sql $query
# result2 = %sql select gid, name, geom from uslakes where name like '%Ontario%';
display([result1, result2], "map4", 6)
# -
# 3.6.7 Query the city closest to the accident with ST_CASE = 10012, returning the city name as 'name in state'; the keyword limit may not be used. (2 points)
# + language="sql"
# select uc.name || ' in ' || uc.state as name from uscities uc, usaccidents ua where st_case = 10012
# and ST_Distance(uc.geom,ua.geom) = ( select min(ST_Distance(uc.geom,ua.geom)) from uscities uc, usaccidents ua where st_case = 10012);
# -
# 3.6.8 Query which highways are connected to highway 94 (gid=94), excluding highway 94 itself, and compute their total length in kilometers; show these highways on OpenStreetMap with display; the result schema must contain at least gid, name and geom, where name is the highway's full_name. (3 points)
# +
# Query the connected highways
query = """
select gid, full_name as name, geom from ushighways where
ST_Intersects((select geom from ushighways where gid = 94),geom) and gid != 94;
"""
# result1 = %sql $query
# result2 = %sql select gid,geom, full_name as name from ushighways where gid = 94
display([result1, result2], "map5", 5)
# Query the total length
query2 = """
select sum(ST_Length(geom::geography)/1000.0) from ushighways where
ST_Intersects((select geom from ushighways where gid = 94),geom) and gid != 94 ;
"""
# %sql $query2
# -
# 3.6.9 Query the highway closest to Lake Erie, and show the lake and the highway on OpenStreetMap with display; the result schema must contain at least gid, name and geom, with the highway's name being its full_name. (3 points)
# +
query = """
select gid, full_name as name, geom
from ushighways order by
ST_Distance((select geom from uslakes where name like '%Erie%'),geom) limit 1;
"""
# result1 = %sql $query
# result2 = %sql select gid, name, geom from uslakes where name like '%Erie%'
display([result1, result2], "map6", 4)
# -
# 3.6.10 Query the most remote city, i.e. the one farthest from any highway; show that city and its nearest highway on OpenStreetMap with display; the result schema must contain at least gid, name and geom, with the highway's name being its full_name. (3 points)
# +
# Query the most remote city
query1 = """
select uc.gid, uc.name || ' in ' || uc.state as name, uc.geom, min(ST_Distance(uc.geom,hw.geom)) d
from uscities uc , ushighways hw group by uc.gid order by d desc limit 1;
"""
# result1 = %sql $query1
# Query the highway nearest to the most remote city
query2 = """
select gid, full_name as name, geom from ushighways order by ST_Distance((select uc.geom
from uscities uc,ushighways hw group by uc.gid order by min(ST_Distance(uc.geom,hw.geom)) desc limit 1),geom) limit 1;
"""
# result2 = %sql $query2
display([result1, result2], "map7", 4)
# -
# 3.6.11 Query which highways cross lakes; list each highway and its length inside the lake, ordered from longest to shortest; show these highways and lakes on OpenStreetMap with display; the result schema must contain at least gid, name and geom, where the highway's hname is its full_name. (3 points)
# +
# Query the highways that cross lakes
query1 = """
select distinct hw.gid, full_name as name, hw.geom from ushighways hw, uslakes la where ST_Crosses(la.geom,hw.geom);
"""
# result1 = %sql $query1
# Query the lakes crossed by highways
query2 = """
select distinct la.gid, la.name, la.geom from ushighways hw, uslakes la where ST_Crosses(la.geom,hw.geom);
"""
# result2 = %sql $query2
display([result1, result2], "map8", 4)
# Query each highway's length inside the lake (hgid, hname, lgid, lname, length)
query3 = """
select hw.gid hgid, full_name as hname, la.gid lgid, la.name lname,ST_Length(ST_Intersection(la.geom,hw.geom)::geography)/1000.0 length
from ushighways hw, uslakes la where ST_Crosses(la.geom,hw.geom) order by length desc;
"""
# %sql $query3
# -
# 3.6.12 Associate traffic accidents with highways by spatial distance: an accident less than 500 meters from a highway is considered to have happened on that highway. Which highway has the most accidents? Since there are many accidents, the full query takes about 30 minutes; use ST_DWithin to speed up the distance test, and consider only accidents that happened in August and September. Show the highway and its associated accidents on OpenStreetMap with display; the result schema must contain at least gid, name and geom, where the highway's name is its full_name. (4 points)<br/>
# **Spatial-join queries**: such spatial joins are a common technique in data mining with wide application, e.g. [associating roads with vehicles](https://www.csdn.net/article/2015-01-23/2823687-geographic-space-base-Hadoop) to analyze road congestion.
# +
# Query the highway with the most accidents
query1 = """
select hw.gid, hw.full_name as name, hw.geom, count(*) c from (select * from usaccidents where month = 8 or month = 9) ua, ushighways hw
where ST_DWithin(hw.geom::geography, ua.geom::geography,500)
group by hw.gid order by c desc limit 1;
"""
# result1 = %sql $query1
print(result1[0].gid)
# Query the accidents on that highway
query2 = """
select ua.gid, hw1.name,ua.geom from
(select hw.gid, hw.full_name as name, hw.geom, count(*) c
from (select * from usaccidents where month = 8 or month = 9) ua, ushighways hw
where ST_DWithin(hw.geom::geography, ua.geom::geography,500)
group by hw.gid order by c desc limit 1) hw1, (select * from usaccidents where month = 8 or month = 9) ua
where ST_DWithin(hw1.geom::geography, ua.geom::geography,500);
"""
# result2 = %sql $query2
display([result1, result2], "map9", 4, 1)
# -
# 3.6.13 Import the California (cal) shapefile and compute California's longitude/latitude extent. Within that extent, build a $50\times46$ grid (50 cells in X, 46 in Y) following the grid-generation method of exercise 4, and intersect the grid with the California polygon, i.e. cells on the boundary keep only the part intersecting California. Count the traffic accidents in each cell and visualize with choroplethMap, whose result schema must contain at least gid, name, geom and value, where value is the accident count in the cell; the SQL can be simplified with a with statement. (4 points)<br/>
# **Grid-based spatial joins**: such grid joins are common in practice; e.g. Didi and Uber count ride requests per hexagonal cell for real-time dynamic pricing, [Uber Deck Grid and Hexagon layers](https://eng.uber.com/deck-gl-4-0/)
# <img src="hexagon.jpg">
# %sql select * from cal;
# %sql select UpdateGeometrySRID('cal', 'geom', 4326);
# +
query = """
select x || '0' || y as gid, x || ' ' || y as name, grid1.geom ,count(*) as value
from usaccidents ua,
(WITH
usext AS (
SELECT
ST_SetSRID(CAST(ST_Extent(geom) AS geometry),
4326) AS geom_ext, 50 AS x_gridcnt, 46 AS y_gridcnt
FROM cal
),
grid_dim AS (
SELECT
(
ST_XMax(geom_ext)-ST_XMin(geom_ext)
) / x_gridcnt AS g_width,
ST_XMin(geom_ext) AS xmin, ST_xmax(geom_ext) AS xmax,
(
ST_YMax(geom_ext)-ST_YMin(geom_ext)
) / y_gridcnt AS g_height,
ST_YMin(geom_ext) AS ymin, ST_YMax(geom_ext) AS ymax
FROM usext
),
grid AS (
SELECT
x, y,
ST_MakeEnvelope(
xmin + (x - 1) * g_width, ymin + (y - 1) * g_height,
xmin + x * g_width, ymin + y * g_height,
4326
) AS grid_geom
FROM
(SELECT generate_series(1,x_gridcnt) FROM usext) AS x(x)
CROSS JOIN
(SELECT generate_series(1,y_gridcnt) FROM usext) AS y(y)
CROSS JOIN
grid_dim
)
SELECT
g.x x, g.y y,
ST_Intersection(s.geom, grid_geom) AS geom
FROM cal AS s INNER JOIN grid AS g
ON ST_Intersects(s.geom,g.grid_geom)) grid1
where ST_Within(ua.geom,grid1.geom)
group by grid1.x,grid1.y,grid1.geom;
"""
# result = %sql $query
choroplethMap(result, "map10", 6, 1)
# -
# 3.6.14 Query the traffic accidents inside California and visualize them with heatMap, which requires the query result schema to contain at least the gid, name and geom attributes (name can be any value); then compare the similarities and differences between ChoroplethMap and HeatMap for geospatial visualization (3 points)<br/>
# **Data visualization**: techniques that exploit human visual perception to represent data interactively and enhance cognition; an effective means of data analysis, e.g. [Uber data visualization](http://dataunion.org/24227.html)
# +
query = """
select gid, tway_id as name, geom from usaccidents where ST_Within(geom, (select geom from cal ));
"""
# result = %sql $query
heatMap(result, "map11", 4, 1)
# + active=""
# Similarities and differences between ChoroplethMap and HeatMap for geospatial visualization:
# Similarities: both give an intuitive picture of the spatial distribution of the data, such as density or frequency;
# Differences:
# ChoroplethMap requires a spatial grid partition and is more complex to draw, but represents the data more accurately;
# HeatMap aggregates large amounts of data simply and renders them with a gradient color ramp, which is more intuitive but less accurate.
# -
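# To make the grid-aggregation idea behind the choropleth concrete, it can be sketched with plain NumPy on synthetic coordinates (the points below are hypothetical, not the real accident table):

```python
import numpy as np

# hypothetical accident coordinates, roughly spanning California's extent
rng = np.random.default_rng(0)
lon = rng.uniform(-124.4, -114.1, 1000)
lat = rng.uniform(32.5, 42.0, 1000)

# choropleth-style aggregation: count points per cell of a 50 x 46 grid
counts, xedges, yedges = np.histogram2d(lon, lat, bins=(50, 46))
print(counts.shape)  # (50, 46)
```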
# ### 3.7 Drunk-driving accident analysis
#
# US Department of Transportation: We're directly soliciting your help to better understand what these data are telling us. Whether you're a non-profit, a tech company, or just a curious citizen wanting to contribute to the conversation in your local community, we want you to jump in and help us understand what the data are telling us. Some key questions worth exploring:
# * How might improving economic conditions around the country change how Americans are getting around? What models can we develop to identify communities that might be at a higher risk for fatal crashes?
# * How might climate change increase the risk of fatal crashes in a community?
# * How might we use studies of attitudes toward speeding, distracted driving, and seat belt use to better target marketing and behavioral change campaigns?
# * How might we monitor public health indicators and behavior risk indicators to target communities that might have a high prevalence of behaviors linked with fatal crashes (drinking, drug use/addiction, etc.)? What countermeasures should we create to address these issues?
#
# <img src="drunk.jpg">
#
# In 2018 the US Department of Transportation ran the [Visualize Transportation Safety](https://www.transportation.gov/solve4safety) visualization challenge.
# 3.7.1 Is drunk driving more likely on weekends? Write a SQL query for the average daily number of drunk-driving incidents on weekdays and on weekends (avg_weekday_count, avg_weekend_count), rounded to 4 decimal places; analyze the result and draw a conclusion, noting that the week starts on a different day in the US than in China (3 points)<br/>
# + language="sql"
# select round( ((select sum(count)::float from (select count(*) from usaccidents where drunk_dr > 0 and day_week between 2 and 6) s)/
# (select count(*)::float from (select day,month,year from usaccidents where day_week between 2 and 6 group by day,month,year) c))::numeric,4)
# avg_weekday_count,
# round( ((select sum(count)::float from (select count(*) from usaccidents where drunk_dr > 0 and day_week not between 2 and 6) s)/
# (select count(*)::float from (select day,month,year from usaccidents where day_week not between 2 and 6 group by day,month,year) c))::numeric,4)
# avg_weekend_count
# ;
# + active=""
# Conclusion: the average daily number of drunk-driving incidents is significantly higher on weekends than on weekdays, so drunk driving is more likely to occur on weekends.
# -
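# The weekday/weekend split used in the SQL above can also be sketched in pandas on a toy table (the rows below are hypothetical; day_week follows the same coding as the SQL, with values 2-6 treated as weekdays):

```python
import pandas as pd

# hypothetical accident records; day_week 2-6 are weekdays, as in the SQL above
df = pd.DataFrame({
    "day_week": [1, 2, 3, 6, 7, 7, 1],
    "drunk_dr": [1, 0, 1, 0, 1, 1, 0],
})

is_weekday = df["day_week"].between(2, 6)
weekday_share = (df.loc[is_weekday, "drunk_dr"] > 0).mean()
weekend_share = (df.loc[~is_weekday, "drunk_dr"] > 0).mean()
print(round(weekday_share, 4), round(weekend_share, 4))
```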
# 3.7.2 (Exercise) During which hours do drunk-driving accidents occur most on weekdays and on weekends? Write a SQL query for (hour, avg_weekday_count, avg_weekend_count), rounded to 4 decimal places; analyze the result and draw a conclusion
# + language="sql"
# select ua.hour,
# round( ((select sum(count)::float from (select count(*) from usaccidents ua1 where ua1.hour = ua.hour and ua1.drunk_dr > 0 and ua1.day_week between 2 and 6) s)/
# (select count(*)::float from (select day,month,year from usaccidents where day_week between 2 and 6 group by day,month,year) c))::numeric,4)
# avg_weekday_count,
# round( ((select sum(count)::float from (select count(*) from usaccidents ua1 where ua1.hour = ua.hour and ua1.drunk_dr > 0 and ua1.day_week not between 2 and 6) s)/
# (select count(*)::float from (select day,month,year from usaccidents where day_week not between 2 and 6 group by day,month,year) c))::numeric,4)
# avg_weekend_count
# from usaccidents ua where hour between 0 and 23 group by hour order by hour;
# -
query1 = """
select ua.hour, round( ((select sum(count)::float from (select count(*) from usaccidents ua1 where ua1.hour = ua.hour and ua1.drunk_dr > 0 and ua1.day_week between 2 and 6) s)/
(select count(*)::float from (select day,month,year from usaccidents where day_week between 2 and 6 group by day,month,year) c))::numeric,4)
avg_weekday_count from usaccidents ua where hour between 0 and 23 group by hour order by hour;
"""
# result1 = %sql $query1
# %matplotlib inline
result1.bar()
query2 = """
select ua.hour, round( ((select sum(count)::float from (select count(*) from usaccidents ua1 where ua1.hour = ua.hour and ua1.drunk_dr > 0 and ua1.day_week not between 2 and 6) s)/
(select count(*)::float from (select day,month,year from usaccidents where day_week not between 2 and 6 group by day,month,year) c))::numeric,4)
avg_weekend_count from usaccidents ua where hour between 0 and 23 group by hour order by hour;
"""
# result2 = %sql $query2
# %matplotlib inline
result2.bar()
# + active=""
# Conclusion: on weekdays the variation across hours is relatively small, with incidents concentrated between 1-3 a.m. and from 6 p.m. to midnight, the evening window being slightly higher; on weekends incidents concentrate between midnight and 3 a.m., and the 6 p.m.-midnight window is also elevated compared with midday and remains higher than on weekdays.
# -
# 3.7.3 (Exercise) Is the hourly number of drunk-driving accidents proportional to the hourly total number of accidents? During which hours of the day is drunk driving the main cause of accidents? Write SQL to check the proportionality
# + language="sql"
# select ua.hour,
# round( ((select sum(count)::float from (select count(*) from usaccidents ua1 where ua1.hour = ua.hour and ua1.drunk_dr > 0 ) s)/
# (select count(*)::float from (select day,month,year from usaccidents group by day,month,year) c))::numeric,4)
# avg_drunk_count,
# round( ((select sum(count)::float from (select count(*) from usaccidents ua1 where ua1.hour = ua.hour ) s)/
# (select count(*)::float from (select day,month,year from usaccidents group by day,month,year) c))::numeric,4)
# avg_total_count
# from usaccidents ua where hour between 0 and 23 group by hour order by hour;
# +
query1 = """
select ua.hour,
round( ((select sum(count)::float from (select count(*) from usaccidents ua1 where ua1.hour = ua.hour and ua1.drunk_dr > 0 ) s)/
(select count(*)::float from (select day,month,year from usaccidents group by day,month,year) c))::numeric,4)
avg_drunk_count
from usaccidents ua where hour between 0 and 23 group by hour order by hour;
"""
query2 = """
select ua.hour,
round( ((select sum(count)::float from (select count(*) from usaccidents ua1 where ua1.hour = ua.hour ) s)/
(select count(*)::float from (select day,month,year from usaccidents group by day,month,year) c))::numeric,4)
avg_total_count
from usaccidents ua where hour between 0 and 23 group by hour order by hour;
"""
# result1 = %sql $query1
# result2 = %sql $query2
# %matplotlib inline
result2.bar()
result1.bar()
# -
# + active=""
# As shown above, orange is the hourly average number of drunk-driving accidents and blue is the hourly average of all accidents.
# Conclusion: the hourly count of drunk-driving accidents is not proportional to the hourly total,
# and in the small hours and late at night (0-3 a.m.) drunk driving is the main cause of accidents.
# -
# 3.7.4 (Bonus) For each hour, compute the average share of drunk-driving accidents among all accidents in that hour on weekdays and on weekends; define hours where the drunk-driving share exceeds 50% as drunk-driving-prone hours; write SQL to analyze the difference between the weekend and weekday drunk-driving-prone hours and its main causes (2 points)
# + language="sql"
# select ua.hour,
# round( (((select sum(count)::float from
# (select count(*) from usaccidents ua1 where ua1.hour = ua.hour and ua1.drunk_dr > 0 and ua1.day_week between 2 and 6)
# s)/
# (select count(*)::float from
# (select day,month,year from usaccidents where day_week between 2 and 6 group by day,month,year)
# c))/
# ((select sum(count)::float from (select count(*) from usaccidents ua1 where ua1.hour = ua.hour and ua1.day_week between 2 and 6 ) s)/
# (select count(*)::float from (select day,month,year from usaccidents ua1 where ua1.day_week between 2 and 6 group by day,month,year) c)))
# ::numeric,4)*100
# avg_weekday_ratios,
# round( (((select sum(count)::float from
# (select count(*) from usaccidents ua1 where ua1.hour = ua.hour and ua1.drunk_dr > 0 and ua1.day_week not between 2 and 6)
# s)/
# (select count(*)::float from
# (select day,month,year from usaccidents where day_week not between 2 and 6 group by day,month,year)
# c))/
# ((select sum(count)::float from (select count(*) from usaccidents ua1 where ua1.hour = ua.hour and ua1.day_week not between 2 and 6 ) s)/
# (select count(*)::float from (select day,month,year from usaccidents ua1 where ua1.day_week not between 2 and 6 group by day,month,year) c)))
# ::numeric,4)*100
# avg_weekend_ratios
# from usaccidents ua where hour between 0 and 23 group by hour order by hour;
# + active=""
# Conclusion: from the table above, the weekday drunk-driving-prone hours are 1-2 a.m., while on weekends they run from 11 p.m. to 4 a.m.;
# in both cases the prone hours cluster in the small hours, and the weekend window is clearly longer than the weekday one.
# The main reason is probably that on weekends people are more likely to drive to parties, drink, and return late afterwards;
# the small hours are also when drivers are most fatigued, which further raises the probability of a crash after drinking;
# and since few drivers other than returning party-goers are on the road at that time, drunk driving accounts for a high share of accidents.
# -
# 3.7.5 Analyze which highways show a large difference in drunk-driving accident counts between weekends and weekdays; an accident less than 500 meters from a highway is considered to be on that highway; visualize with choroplethMap, which requires the query result schema to contain at least the gid, name, geom and value attributes, where value is the number of accidents (3 points)
# +
# Drunk-driving accident count per highway on weekdays
query1 = """
select hw.gid, hw.full_name as name, hw.geom, count(*) as value
from (select geom from usaccidents where drunk_dr > 0 and day_week between 2 and 6) ua, ushighways hw
where ST_DWithin(hw.geom::geography, ua.geom::geography,500)
group by hw.gid order by value;
"""
# result1 = %sql $query1
# Drunk-driving accident count per highway on weekends
query2 = """
select hw.gid, hw.full_name as name, hw.geom, count(*) as value
from (select geom from usaccidents where drunk_dr > 0 and day_week not between 2 and 6) ua, ushighways hw
where ST_DWithin(hw.geom::geography, ua.geom::geography,500)
group by hw.gid order by value;
"""
# result2 = %sql $query2
choroplethMap(result1, "map12", 4, 1)
choroplethMap(result2, "map13", 4, 1)
# -
# + active=""
# Conclusion:
# From the maps and the data, a highway is considered to show a large difference when the relative error between its weekend and weekday average daily drunk-driving counts exceeds 0.3 and the absolute error exceeds 3.
# The SQL below, which computes the weekend and weekday average daily counts, finds 16 highways with a large difference:
# on all 16 of them the weekend average daily count exceeds the weekday average daily count.
# -
# + language="sql"
# select weekend.name, week.value as weekvalue, weekend.value as weekendvalue, (weekend.value - week.value)::float/weekend.value::float as df
# from (select hw.gid, hw.full_name as name, count(*)/5.0 as value
# from (select geom from usaccidents where drunk_dr > 0 and day_week between 2 and 6) ua, ushighways hw
# where ST_DWithin(hw.geom::geography, ua.geom::geography,500)
# group by hw.gid) week,
# (select hw.gid, hw.full_name as name, count(*)/2.0 as value
# from (select geom from usaccidents where drunk_dr > 0 and day_week not between 2 and 6) ua, ushighways hw
# where ST_DWithin(hw.geom::geography, ua.geom::geography,500)
# group by hw.gid) weekend
# where week.gid = weekend.gid and ( abs((weekend.value - week.value)::float/weekend.value) > 0.3 or abs((weekend.value - week.value)::float/week.value) > 0.3 )
# and ( abs(weekend.value - week.value) > 3 )
# order by weekend.value desc;
# -
# ### 3.8 (Bonus) Traffic-accident data analysis
#
# Based on the accident data and other related data, pose your own questions and use SQL queries, Python programming, statistical analysis or visualization to mine the patterns and knowledge hidden in the accident data and provide safe-driving advice. Up to 5 bonus points, depending on the significance of the question and the quality of the analysis
# - Accident statistics by month:
# +
query2 = """
select month,count(*) from usaccidents group by month order by month;
"""
# result2 = %sql $query2
# %matplotlib inline
result2.bar()
# -
# + active=""
# Conclusion: accidents peak in the autumn months (July-October) and are lowest in February (spring);
# this may be because February is cold and snowy, making travel harder and trips less frequent.
# The query below goes further and finds highways where the average number of crashes in spring (February)
# differs greatly from the average in autumn (August), i.e. seasonally accident-prone highways:
# -
query1 = """
select c8.gid, c8.name,c8.geom, ( c8.value-c2.value ) as value
from
(select hw.gid, hw.full_name as name, hw.geom, count(*) as value
from (select * from usaccidents where month = 8 ) ua, ushighways hw
where ST_DWithin(hw.geom::geography, ua.geom::geography,500)
group by hw.gid) c8,
(select hw.gid, hw.full_name as name, hw.geom, count(*) as value
from (select * from usaccidents where month = 2 ) ua, ushighways hw
where ST_DWithin(hw.geom::geography, ua.geom::geography,500)
group by hw.gid) c2
where c2.gid = c8.gid order by value;
"""
# result1 = %sql $query1
choroplethMap(result1, "map22", 4, 1)
# - Statistics on the first harmful event of each accident:
# + language="sql"
# select harm_ev,count(*)::float/(select count(*) from usaccidents)*100.0 count from usaccidents group by harm_ev order by count desc limit 5;
# -
# %sql select sum(count) from (select harm_ev,count(*)::float/(select count(*) from usaccidents)*100.0 count from usaccidents group by harm_ev order by count desc limit 5) c;
# +
import matplotlib.pyplot as plt
labels = ['Collision with Motor Vehicle In-Transport','Collision with Pedestrian','Rollover/Overturn','Collision with Tree (Standing Only)','Collision with Curb','other']
sizes = [38.05,15.49,8.78,7.14,3.39,27.15]
plt.pie(sizes,labels=labels,autopct='%1.1f%%',shadow=False,startangle=150)
plt.title("The First Harmful Event")
plt.show()
# + active=""
# Conclusion:
# Counting the first harmful event of each accident, the top 5 are:
# Collision with Motor Vehicle In-Transport: 38.05%
# Collision with Pedestrian: 15.49%
# Rollover/Overturn: 8.78%
# Collision with Tree (Standing Only): 7.14%
# Collision with Curb: 3.39%
# These five event types account for 72.85% of all accidents
# -
# - Statistics on weather and light conditions at the time of the accident
# + language="sql"
# select weather,count(*)::float/(select count(*) from usaccidents)*100.0 count from usaccidents where weather != 0 group by weather order by count desc limit 5;
# -
# + active=""
# Counting the weather at the time of the accident, the top 5 conditions are:
# Clear: 71.20%
# Cloudy: 17.04%
# Rain: 7.66%
# Fog, Smog, Smoke: 1.25%
# Snow: 0.98%
# Conclusion: the vast majority of accidents happen in good weather;
# -
# + language="sql"
# select lgt_cond,count(*)::float/(select count(*) from usaccidents)*100.0 count from usaccidents where lgt_cond != 9 group by lgt_cond order by count desc limit 5;
# -
# + active=""
# Conclusion: most accidents happen under good lighting conditions;
# -
# - Statistics on the route type where accidents occur
# %sql select route,count(*)::float/(select count(*) from usaccidents where route != 9)*100.0 "the route signing of the trafficway" from usaccidents where route != 9 group by route order by "the route signing of the trafficway" desc limit 3;
query2 = """
select route,count(*)::float/(select count(*) from usaccidents where route != 9)*100.0 "the route signing of the trafficway" from usaccidents where route != 9 group by route order by "the route signing of the trafficway" desc;
"""
# result2 = %sql $query2
# %matplotlib inline
result2.bar()
# + active=""
# Route codes:
# 1 Interstate
# 2 U.S. Highway
# 3 State Highway
# 4 County Road
# 5 Local Street - Township
# 6 Local Street - Municipality
# 7 Local Street - Frontage Road
# 8 Other
# Conclusion:
# The top three route types by accident share are:
# State Highway 30.53%
# Local Street - Municipality 17.81%
# U.S. Highway 16.7%
# Highways as a whole account for the largest total number of accidents;
# -
# - Number of vehicles involved per accident
# + language="sql"
# select ve_total,count(*)::float/(select count(*) from usaccidents)*100.0 percentage from usaccidents group by ve_total order by ve_total limit 5;
#
# -
# + active=""
# Conclusion: single-vehicle accidents account for the majority.
# -
# - Average number of hours for emergency medical services to arrive, computed per highway and plotted:
# - Arrival Time (Hour) EMS is the time Emergency Medical Service arrived on the crash scene.
#
query1 = """
select hw.gid, hw.full_name as name,hw.geom, avg(ua.d_hour) as value,count(*) as c
from ( select (arr_hour - hour) as d_hour,geom from usaccidents where arr_hour != 99 and arr_hour != 88 and hour != 99 and arr_hour - hour > 0 ) ua, ushighways hw
where ST_DWithin(hw.geom::geography, ua.geom::geography,500)
group by hw.gid having count(*) > 3 order by value desc;
"""
# result1 = %sql $query1
choroplethMap(result1, "map20", 4, 1)
# Conclusion: the longer a highway's average EMS response time, the more likely it is that crash victims do not receive timely help, i.e. the more dangerous the highway; the ten highways with the longest average response time are selected as those at higher medical-rescue risk
# (highways with too few recorded incidents are ignored)
# + language="sql"
# select hw.gid, hw.full_name as name, avg(ua.d_hour) as value,count(*) as c
# from ( select (arr_hour - hour) as d_hour,geom from usaccidents where arr_hour != 99 and arr_hour != 88 and hour != 99 and arr_hour - hour > 0 ) ua, ushighways hw
# where ST_DWithin(hw.geom::geography, ua.geom::geography,500)
# group by hw.gid having count(*) > 4 order by value desc limit 10;
# -
# - Query the highway with the most accidents per kilometer:
# Similar to 3.6.12: associate accidents with highways by spatial distance (an accident less than 500 meters from a highway is considered to be on it) and find the highway with the most accidents; use ST_DWithin to speed up the distance test, considering only accidents in August and September.
query1 = """
select hw.gid, hw.full_name as name, hw.geom, count(*)/( ST_Length(hw.geom::geography) /1000.0 ) as value from (select * from usaccidents where month between 8 and 9) ua, ushighways hw
where ST_DWithin(hw.geom::geography, ua.geom::geography,500)
group by hw.gid order by value ;
"""
# result1 = %sql $query1
choroplethMap(result1, "map21", 4, 1)
# + active=""
# Conclusion: from the map, the highways with the most accidents per kilometer are
# mostly secondary highways located where two highways connect;
# -
# - Association analysis with the Apriori algorithm:
# Use the ve_total, hour, route, lgt_cond, weather1 and harm_ev attributes;
# run the association analysis over the whole dataset:
# +
import pandas as pd
query1 = """
select ve_total,hour,route,lgt_cond,weather1,harm_ev from usaccidents;
"""
index = ["ve_total","hour","route","lgt_cond","weather1","harm_ev"]
# result1 = %sql $query1
df = pd.DataFrame(result1)
print(len(df.index))
for i in range(0,len(index)):
df[i] = df[i].map(lambda x: str(x)+"_"+index[i])
from mlxtend.preprocessing import TransactionEncoder
def deal(data):
return data.dropna().tolist()
df_arr = df.apply(deal,axis=1).tolist()
te = TransactionEncoder() # instantiate the transaction encoder
df_tf = te.fit_transform(df_arr)
df = pd.DataFrame(df_tf,columns=te.columns_)
from mlxtend.frequent_patterns import apriori
frequent_itemsets = apriori(df,min_support=0.005,use_colnames=True) # use_colnames=True keeps the item names; the default False would use column indices instead
# -
# - Minimum support set to 0.005;
# - Sorted by leverage:
# 
# - Drop rules with confidence below 0.6
from mlxtend.frequent_patterns import association_rules
association_rule = association_rules(frequent_itemsets,metric='confidence',min_threshold=0.6)
association_rule.sort_values(by='leverage',ascending=False,inplace=True)
# print(association_rule.head())
# fetch association-analysis results for each accident type separately
for i in range(1,75):
pdf = association_rule[association_rule["consequents"] == frozenset({str(i)+'.0_harm_ev'})]
if pdf.shape[0] != 0:
print('\n\n')
print(i)
print(pdf.shape[0])
print(pdf.head())
# + active=""
# Conclusion: from the above,
# the vast majority of vehicle-to-vehicle collisions happen in clear weather and good light, so good visibility and weather are no reason to let one's guard down;
# vehicle-to-vehicle collisions on state highways make up the largest share;
# and pedestrians are most likely to be struck just after nightfall (7-8 p.m.).
# -
# ### Reflections on the assignment
#
# Takeaways :-), questions :-|, complaints :-(, ..., your feedback matters
# + active=""
# The analysis still feels somewhat incomplete...
# Association analysis could also be run separately on particular event types, such as drunk-driving or school-bus accidents
|
地理空间数据库/homework/hw2/作业2_3180102760_郑昱笙.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Training Neural Networks with Keras
#
# ### Goals:
# - Intro: train a neural network with `tensorflow` and the Keras layers
#
# ### Dataset:
# - Digits: 10 class handwritten digits
# - http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits
# +
# %matplotlib inline
# display figures in the notebook
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import load_digits
digits = load_digits()
# -
sample_index = 45
plt.figure(figsize=(3, 3))
plt.imshow(digits.images[sample_index], cmap=plt.cm.gray_r,
interpolation='nearest')
plt.title("image label: %d" % digits.target[sample_index]);
# ## Train / Test Split
#
# Let's keep some held-out data to be able to measure the generalization performance of our model.
# +
from sklearn.model_selection import train_test_split
data = np.asarray(digits.data, dtype='float32')
target = np.asarray(digits.target, dtype='int32')
X_train, X_test, y_train, y_test = train_test_split(
data, target, test_size=0.15, random_state=37)
# -
# ## Preprocessing of the Input Data
#
#
# Make sure that all input variables are approximately on the same scale via input normalization:
# +
from sklearn import preprocessing
# mean = 0 ; standard deviation = 1.0
scaler = preprocessing.StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
# print(scaler.mean_)
# print(scaler.scale_)
# -
# Let's display one of the transformed samples (after feature standardization):
sample_index = 45
plt.figure(figsize=(3, 3))
plt.imshow(X_train[sample_index].reshape(8, 8),
cmap=plt.cm.gray_r, interpolation='nearest')
plt.title("transformed sample\n(standardization)");
# The scaler object makes it possible to recover the original sample:
plt.figure(figsize=(3, 3))
plt.imshow(scaler.inverse_transform(X_train[sample_index]).reshape(8, 8),
cmap=plt.cm.gray_r, interpolation='nearest')
plt.title("original sample");
print(X_train.shape, y_train.shape)
print(X_test.shape, y_test.shape)
# ## Preprocessing of the Target Data
#
#
# To train a first neural network we also need to turn the target variable into a vector "one-hot-encoding" representation. Here are the labels of the first samples in the training set encoded as integers:
y_train[:3]
# Keras provides a utility function to convert integer-encoded categorical variables as one-hot encoded values:
# +
from tensorflow.keras.utils import to_categorical
Y_train = to_categorical(y_train)
Y_train[:3]
# -
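# The same encoding can be written by hand with NumPy, which makes the construction explicit (a sketch, not the Keras implementation):

```python
import numpy as np

labels = np.array([2, 0, 1])           # integer class labels
num_classes = labels.max() + 1
one_hot = np.eye(num_classes)[labels]  # row i is the one-hot vector for labels[i]
print(one_hot)
```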
# ## Feed Forward Neural Networks with Keras
#
# Objectives of this section:
#
# - Build and train a first feedforward network using `Keras`
# - https://www.tensorflow.org/guide/keras/overview
# - Experiment with different optimizers, activations, size of layers, initializations
#
# ### A First Keras Model
# We can now build and train our first feed-forward neural network using the high-level API from keras:
#
# - first we define the model by stacking layers with the right dimensions
# - then we define a loss function and plug the SGD optimizer
# - then we feed the model the training data for a fixed number of epochs
# +
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
from tensorflow.keras import optimizers
input_dim = X_train.shape[1]
hidden_dim = 100
output_dim = 10
model = Sequential()
model.add(Dense(hidden_dim, input_dim=input_dim, activation="tanh"))
model.add(Dense(output_dim, activation="softmax"))
model.compile(optimizer=optimizers.SGD(lr=0.1),
loss='categorical_crossentropy', metrics=['accuracy'])
history = model.fit(X_train, Y_train, validation_split=0.2, epochs=15, batch_size=32)
# -
# ### Visualizing the Convergence
history.history
history.epoch
# Let's wrap this into a pandas dataframe for easier plotting:
# +
import pandas as pd
history_df = pd.DataFrame(history.history)
history_df["epoch"] = history.epoch
history_df
# -
fig, (ax0, ax1) = plt.subplots(nrows=2, sharex=True, figsize=(12, 6))
history_df.plot(x="epoch", y=["loss", "val_loss"], ax=ax0)
history_df.plot(x="epoch", y=["accuracy", "val_accuracy"], ax=ax1);
# ### Monitoring Convergence with Tensorboard
#
# Tensorboard is a built-in neural network monitoring tool.
# %load_ext tensorboard
# !rm -rf tensorboard_logs
# +
import datetime
from tensorflow.keras.callbacks import TensorBoard
model = Sequential()
model.add(Dense(hidden_dim, input_dim=input_dim, activation="tanh"))
model.add(Dense(output_dim, activation="softmax"))
model.compile(optimizer=optimizers.SGD(lr=0.1),
loss='categorical_crossentropy', metrics=['accuracy'])
timestamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
log_dir = "tensorboard_logs/" + timestamp
tensorboard_callback = TensorBoard(log_dir=log_dir, histogram_freq=1)
model.fit(x=X_train, y=Y_train, validation_split=0.2, epochs=15,
callbacks=[tensorboard_callback]);
# -
# %tensorboard --logdir tensorboard_logs
# ### b) Exercises: Impact of the Optimizer
#
# - Try to decrease the learning rate value by 10 or 100. What do you observe?
#
# - Try to increase the learning rate value to make the optimization diverge.
#
# - Configure the SGD optimizer to enable a Nesterov momentum of 0.9
#
# **Notes**:
#
# The keras API documentation is available at:
#
# https://www.tensorflow.org/api_docs/python/tf/keras
#
# It is also possible to learn more about the parameters of a class by using the question mark: type and evaluate:
#
# ```python
# optimizers.SGD?
# ```
#
# in a jupyter notebook cell.
#
# It is also possible to type the beginning of a function call / constructor and type "shift-tab" after the opening paren:
#
# ```python
# optimizers.SGD(<shift-tab>
# ```
# +
# optimizers.SGD?
# -
# +
# # %load solutions/keras_sgd_and_momentum.py
# -
# - Replace the SGD optimizer by the Adam optimizer from keras and run it
# with the default parameters.
#
# Hint: use `optimizers.<TAB>` to tab-complete the list of implemented optimizers in Keras.
#
# - Add another hidden layer and use the "Rectified Linear Unit" for each
# hidden layer. Can you still train the model with Adam with its default global
# learning rate?
# +
# # %load solutions/keras_adam.py
# -
# ### Exercises: Forward Pass and Generalization
#
# - Compute predictions on test set using `model.predict_classes(...)`
# - Compute average accuracy of the model on the test set: the fraction of test samples for which the model makes a prediction that matches the true label.
# +
# # %load solutions/keras_accuracy_on_test_set.py
# -
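# On plain arrays, the accuracy computation reduces to the mean of a boolean mask; a toy sketch with hypothetical labels:

```python
import numpy as np

# hypothetical predictions vs. ground truth
y_pred = np.array([0, 1, 2, 2, 1])
y_true = np.array([0, 1, 1, 2, 1])
accuracy = np.mean(y_pred == y_true)  # fraction of matching labels
print(accuracy)  # 0.8
```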
# ### numpy arrays vs tensorflow tensors
#
# In the previous exercises we used `model.predict_classes(...)` that returns a numpy array:
predicted_labels_numpy = model.predict_classes(X_test)
predicted_labels_numpy
type(predicted_labels_numpy), predicted_labels_numpy.shape
# Alternatively one can directly call the model on the data to get the last layer (softmax) outputs directly as a tensorflow Tensor:
predictions_tf = model(X_test)
predictions_tf[:5]
type(predictions_tf), predictions_tf.shape
# We can use the tensorflow API to check that for each row, the probabilities sum to 1:
# +
import tensorflow as tf
tf.reduce_sum(predictions_tf, axis=1)[:5]
# -
# We can also extract the label with the highest probability using the tensorflow API:
predicted_labels_tf = tf.argmax(predictions_tf, axis=1)
predicted_labels_tf[:5]
# We can compare those labels to the expected labels to compute the accuracy with the Tensorflow API. Note however that we need an explicit cast from boolean to floating point values to be able to compute the mean accuracy when using the tensorflow tensors:
accuracy_tf = tf.reduce_mean(tf.cast(predicted_labels_tf == y_test, tf.float64))
accuracy_tf
# Also note that it is possible to convert tensors to numpy array if one prefer to use numpy:
accuracy_tf.numpy()
predicted_labels_tf[:5]
predicted_labels_tf.numpy()[:5]
(predicted_labels_tf.numpy() == y_test).mean()
# ## Home Assignment: Impact of Initialization
#
# Let us now study the impact of a bad initialization when training
# a deep feed forward network.
#
# By default Keras dense layers use the "Glorot Uniform" initialization
# strategy to initialize the weight matrices:
#
# - each weight coefficient is randomly sampled from [-scale, scale]
# - scale is proportional to $\frac{1}{\sqrt{n_{in} + n_{out}}}$
#
# This strategy is known to work well to initialize deep neural networks
# with "tanh" or "relu" activation functions and then trained with
# standard SGD.
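# For reference, the Glorot uniform limit for a given layer is $\sqrt{6 / (n_{in} + n_{out})}$; a quick sketch for the first hidden layer of this network (64 inputs, 100 hidden units):

```python
import numpy as np

n_in, n_out = 64, 100                  # fan-in and fan-out of the first hidden layer
limit = np.sqrt(6.0 / (n_in + n_out))  # Glorot uniform samples from [-limit, limit]
print(round(limit, 4))
```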
#
# To assess the impact of initialization let us plug an alternative init
# scheme into a 2 hidden layers networks with "tanh" activations.
# For the sake of the example let's use normal distributed weights
# with a manually adjustable scale (standard deviation) and see the
# impact the scale value:
# +
from tensorflow.keras import initializers
normal_init = initializers.TruncatedNormal(stddev=0.01)
model = Sequential()
model.add(Dense(hidden_dim, input_dim=input_dim, activation="tanh",
kernel_initializer=normal_init))
model.add(Dense(hidden_dim, activation="tanh",
kernel_initializer=normal_init))
model.add(Dense(output_dim, activation="softmax",
kernel_initializer=normal_init))
model.compile(optimizer=optimizers.SGD(lr=0.1),
loss='categorical_crossentropy', metrics=['accuracy'])
# -
model.layers
# Let's have a look at the parameters of the first layer after initialization but before any training has happened:
model.layers[0].weights
w = model.layers[0].weights[0].numpy()
w
w.std()
b = model.layers[0].weights[1].numpy()
b
# +
history = model.fit(X_train, Y_train, epochs=15, batch_size=32)
plt.figure(figsize=(12, 4))
plt.plot(history.history['loss'], label="Truncated Normal init")
plt.legend();
# -
# Once the model has been fit, the weights have been updated and notably the biases are no longer 0:
model.layers[0].weights
# #### Questions:
#
# - Try the following initialization schemes and see whether
# the SGD algorithm can successfully train the network or
# not:
#
# - a very small e.g. `stddev=1e-3`
# - a larger scale e.g. `stddev=1` or `10`
# - initialize all weights to 0 (constant initialization)
#
# - What do you observe? Can you find an explanation for those
# outcomes?
#
# - Are more advanced solvers such as SGD with momentum or Adam able
# to deal better with such bad initializations?
# +
# # %load solutions/keras_initializations.py
# +
# # %load solutions/keras_initializations_analysis.py
|
labs/01_keras/Intro Keras.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import re
import requests
from bs4 import BeautifulSoup
from operator import attrgetter
src_getter = attrgetter('src')
# +
site = '/Users/timpierson/arity/fauveal-subspace/python/munger/notebooks/wikiArt.html'
with open (site, 'r') as f:
soup = BeautifulSoup(f, 'html.parser')
img_tags = soup.find_all('img')
urls = []
for img in img_tags:
try:
urls.append(img['src'])
    except KeyError:
        # skip <img> tags that have no src attribute
        pass
urls = [url.split("!")[0] for url in urls]
# urls = [img['src'] for img in img_tags if "src" in img]
for url in urls:
filename = re.search(r'/([\w_-]+[.](jpg|gif|png))$', url)
if not filename:
print("Regex didn't match with the url: {}".format(url))
continue
with open("./images/" + filename.group(1), 'wb') as f:
if 'http' not in url:
# sometimes an image source can be relative
# if it is provide the base url which also happens
# to be the site variable atm.
url = '{}{}'.format(site, url)
response = requests.get(url)
f.write(response.content)
# -
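# The filename-extracting regex used above can be checked in isolation (the URL below is hypothetical, for illustration only):

```python
import re

url = "https://uploads0.wikiart.org/images/pablo-picasso/guernica-1937.jpg"
match = re.search(r'/([\w_-]+[.](jpg|gif|png))$', url)
print(match.group(1))  # guernica-1937.jpg
```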
|
python/munger/notebooks/DownTheImages.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img width=700px; src="../img/logoUPSayPlusCDS_990.png">
#
# <p style="margin-top: 3em; margin-bottom: 2em;"><b><big><big><big><big>Introduction to Scipy and Statsmodels libraries</big></big></big></big></b></p>
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# The SciPy library is one of the core packages that make up the SciPy stack. It provides many user-friendly and efficient numerical routines such as routines for numerical integration and optimization.
# ## 1. File input/output - `scipy.io`
# Scipy provides an `io` module to help load some data types. We can easily read MATLAB `.mat` files using `io.loadmat` and `io.savemat`.
from scipy.io import loadmat, savemat
a = np.ones((3, 3))
savemat('file.mat', {'a': a}) # savemat expects a dictionary
data = loadmat('file.mat', struct_as_record=True)
data['a']
# <div class="alert alert-success">
#
# <b>EXERCISE - `scipy.io`</b>:
#
# <ul>
# <li>Load the matfile from `data/spectra.mat` using `scipy.io.loadmat`.</li>
# <li>Extract from the loaded dictionary two variables (`spectra`, `frequency`). You should call `ravel` the `frequency` array to obtain a 1-D array.</li>
# <li>Plot the spectra in function of the frequency.</li>
# </ul>
#
# </div>
data = loadmat('data/spectra.mat', struct_as_record=True)
data
data['spectra']
data['frequency'].shape
data['frequency'][0,:].shape
import numpy as np
np.ravel(data['frequency'], 'C').shape
data['spectra'].shape
data['spectra'][0,:]
plt.plot( data['frequency'][0,:], data['spectra'][0,:])
plt.plot( data['frequency'][0,:], data['spectra'][1,:])
for i in range(data['spectra'].shape[0]):
plt.plot( data['frequency'][0,:], data['spectra'][i,:])
plt.plot( data['frequency'][0,:], data['spectra'][1,:])
data['frequency'][0,:].astype('int')
# ## 2. Signal interpolation - `scipy.interpolate`
# The scipy.interpolate module is useful for fitting a function to experimental data and thus evaluating points where no measurement exists. Imagine experimental data close to a sine function:
measured_time = np.linspace(0, 1, 10)
noise = (np.random.random(10)*2 - 1) * 1e-1
measures = np.sin(2 * np.pi * measured_time) + noise
# The `scipy.interpolate.interp1d` class can build a linear interpolation function:
from scipy.interpolate import interp1d
linear_interp = interp1d(measured_time, measures)
# Then the `linear_interp` instance needs to be evaluated at the times of interest:
computed_time = np.linspace(0, 1, 50)
linear_results = linear_interp(computed_time)
# A cubic interpolation can also be selected by providing the `kind` optional keyword argument:
cubic_interp = interp1d(measured_time, measures, kind='cubic')
cubic_results = cubic_interp(computed_time)
# Let's see the difference by plotting the results.
plt.plot(measured_time, measures, 'or', label='Measures')
plt.plot(computed_time, linear_results, label='Linear interpolation')
plt.plot(computed_time, cubic_results, label='Cubic interpolation')
plt.legend()
plt.xlabel('Time')
plt.ylabel('Amplitude')
plt.show()
# <div class="alert alert-success">
#
# <b>EXERCISE - `scipy.interpolate`</b>:
#
# <ul>
#       <li>Interpolate each spectrum at the integer frequencies {401, 402, ..., 3999} using `scipy.interpolate.interp1d`.</li>
#       <li>Plot the spectra as a function of the frequencies.</li>
# </ul>
#
# </div>
freq =data['frequency'][0,:]
data['frequency'][0,:]
from scipy.interpolate import interp1d
linear_interp = interp1d(freq, data['spectra'][0,:])
#print x values i.e. frequencies
print(linear_interp.x)
# print y values i.e. spectra
print(linear_interp.y)
freq_int =data['frequency'][0,:].astype('int')
freq_int[0]=401
linear_results = linear_interp(freq_int)
linear_results
plt.plot( data['frequency'][0,:], data['spectra'][0,:])
plt.plot( freq_int, linear_results)
nspectra = data['spectra'].shape[0]
nfreq = data['spectra'].shape[1]
result_spectra = np.zeros((nspectra,nfreq))
result_spectra
freq_int =data['frequency'][0,:].astype('int')
freq_int[0]=401
for i in range(nspectra):
linear_interp = interp1d(freq, data['spectra'][i,:])
result_spectra[i,:] = linear_interp(freq_int)
result_spectra.shape
print(freq_int[4:7])
result_spectra[::10,4:7]
print( data['frequency'][0,::60])
for i in range(0,nspectra,50):
plt.plot( data['frequency'][0,::60], result_spectra[i,::60])
# ## 3. Optimization - `scipy.optimize`
# Optimization is the problem of numerically finding the minimum of a function, or the root of an equation.
#
# The scipy.optimize module provides useful algorithms for function minimization (scalar or multi-dimensional), curve fitting and root finding.
from scipy import optimize
# ### Finding the minimum of a scalar function
#
# Let’s define the following function:
def f(x):
return x ** 2 + 10 * np.sin(x)
# and plot it:
x = np.arange(-10, 10, 0.1)
plt.plot(x, f(x))
plt.show()
# This function has a global minimum around -1.3 and a local minimum around 3.8.
#
# The general and efficient way to find a minimum for this function is to conduct a gradient descent starting from a given initial point. The BFGS algorithm is a good way of doing this:
res = optimize.minimize(f, 0, method='L-BFGS-B')
res
# A possible issue with this approach is that, if the function has local minima, the algorithm may find one of them instead of the global minimum, depending on the initial point:
res2 = optimize.minimize(f, 3, method='L-BFGS-B')
res2
# If we don’t know the neighborhood of the global minimum to choose the initial point, we need to resort to costlier global optimization. To find the global minimum, we use `scipy.optimize.basinhopping()` (which combines a local optimizer with stochastic sampling of starting points for the local optimizer):
optimize.basinhopping(f, 3, niter=1000)
# ### Finding the roots of a scalar function
#
# To find a root, i.e. a point where $f(x) = 0$, of the function f above we can use for example `scipy.optimize.fsolve()`:
root = optimize.fsolve(f, 1) # our initial guess is 1
root
# Note that only one root is found. Inspecting the plot of `f` reveals a second root around -2.5, which we find by adjusting our initial guess:
root2 = optimize.fsolve(f, -2.5)
root2
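When a sign-changing bracket is known, `scipy.optimize.brentq` is a robust alternative to `fsolve`. A sketch; the bracket endpoints below are simply read off the plot of `f`:

```python
import numpy as np
from scipy import optimize

def f(x):
    return x ** 2 + 10 * np.sin(x)

# brentq requires an interval [a, b] with f(a) and f(b) of opposite signs
root_a = optimize.brentq(f, -0.5, 0.5)   # the root at x = 0
root_b = optimize.brentq(f, -3.0, -2.0)  # the root near x = -2.5
print(root_a, root_b)
```

Unlike `fsolve`, the bracketing method is guaranteed to converge to a root inside the given interval.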
# ### Curve fitting
#
# Suppose we have data sampled from $f$ with some noise:
xdata = np.linspace(-10, 10, num=100)
ydata = f(xdata) + np.random.normal(0, 2, xdata.shape)
# Now if we know the functional form of the function from which the samples were drawn ($x^2 + \sin(x)$ in this case) but not the amplitudes of the terms, we can find those by least squares curve fitting. First we have to define the function to fit:
def f2(x, a, b):
return a*x**2 + b*np.sin(x)
# Then we can use `scipy.optimize.curve_fit()` to find $a$ and $b$:
guess = [2, 2]
params, params_covariance = optimize.curve_fit(f2, xdata, ydata, guess)
params
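The covariance matrix returned by `curve_fit` can be turned into one-standard-deviation uncertainties on $a$ and $b$. This is a standard recipe, shown here on data regenerated the same way as above (with a fixed seed so the sketch is reproducible):

```python
import numpy as np
from scipy import optimize

def f2(x, a, b):
    return a * x**2 + b * np.sin(x)

# regenerate noisy samples of f(x) = x^2 + 10 sin(x), as above
rng = np.random.RandomState(0)
xdata = np.linspace(-10, 10, num=100)
ydata = f2(xdata, 1, 10) + rng.normal(0, 2, size=xdata.shape)

params, params_covariance = optimize.curve_fit(f2, xdata, ydata, p0=[2, 2])
perr = np.sqrt(np.diag(params_covariance))  # std. deviation of each parameter
print(params, perr)
```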
# ### Summary in a single plot
x = np.arange(-10, 10, 0.1)
plt.plot(xdata, ydata)
# plot the local minima
plt.plot(res.x, f(res.x), 'or', label='minimum')
plt.plot(res2.x, f(res2.x), 'or')
# plot the roots
plt.plot(root, f(root), '^g', label='roots')
plt.plot(root2, f(root2), '^g')
# plot the curved fitted
plt.plot(x, f2(x, params[0], params[1]), '--', label='fitted')
plt.legend()
plt.show()
# <div class="alert alert-success">
#
# <b>EXERCISE - `scipy.optimize`</b>:
#
# The previous spectra can be modelled using a simple function `model_bi_functions` which we defined as:
#
# <br><br>
#
# $$
# S(f)=\left\{
# \begin{array}{ll}
# a f + b, & 0 < f < \mu - 3 \sigma \\
#       (a (\mu - 3 \sigma) + b) + \frac{s}{\sigma\sqrt{2\pi}} \exp\left( - \frac{(f - \mu)^{2}}{2 \sigma^{2}} \right), & f \geq \mu - 3 \sigma\\
# \end{array}
# \right.
# $$
#
# See below a plot which illustrate the profile of this function.
#
# <ul>
#     <li>Using `scipy.optimize.curve_fit`, fit `model_bi_functions` to the first spectrum from `spectra_interp`. You also have to use `frequency_interp` as the `x` values. Use the initial parameters `[0.0, 0.01, 100, 3300, 300]`.</li>
# <li>Plot the results.</li>
# </ul>
#
# </div>
# +
# import helper regarding normal distribution
from scipy.stats import norm
def find_nearest_index(array, value):
"""Find the nearest index of a value in an array."""
idx = (np.abs(array - value)).argmin()
return idx
def model_bi_functions(freqs, a=1e-5, b=0.01,
scale=100, mu=3300, sigma=300):
"""Model to be fitted.
It corresponds to a line from [0, f0] and a
Normal distribution profile from [f0, end].
Parameters
----------
freqs : ndarray, shape (n_freqs,)
Frequencies for which the spectrum will be calculated
a : float, (default=1e-5)
Slope of the line.
b : float, (default=0.01)
Values where the line cut the y-axis.
scale : float, (default=100)
Scaling factor for the amplitude of the Gaussian profile.
mu : float, (default=3300)
Central value of the Gaussian profile.
sigma : float, (default=300)
Standard deviation of the Gaussian profile.
"""
y = np.zeros(freqs.shape)
# find the index of the inflexion point
f0_idx = find_nearest_index(freqs, mu - 3 * sigma)
# line equation
y[:f0_idx] = a * freqs[:f0_idx] + b
# Gaussian profile
y[f0_idx:] = ((a * freqs[f0_idx] + b) +
(scale * norm.pdf(freqs[f0_idx:], mu, sigma)))
return y
# -
frequency_interp = freq_int  # integer frequencies computed in the interpolation exercise above
y = model_bi_functions(frequency_interp)
plt.plot(frequency_interp, y)
plt.xlabel('Frequency')
plt.ylabel('Amplitude')
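A sketch of the requested fit. Since the `.mat` data may not be at hand here, a synthetic spectrum generated from `model_bi_functions` itself stands in for `spectra_interp[0]`; with the real data you would pass `frequency_interp` and `spectra_interp[0]` instead:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def find_nearest_index(array, value):
    """Find the index of the array element nearest to value."""
    return (np.abs(array - value)).argmin()

def model_bi_functions(freqs, a=1e-5, b=0.01, scale=100, mu=3300, sigma=300):
    """Line up to mu - 3*sigma, then line offset plus a Gaussian profile."""
    y = np.zeros(freqs.shape)
    f0_idx = find_nearest_index(freqs, mu - 3 * sigma)
    y[:f0_idx] = a * freqs[:f0_idx] + b
    y[f0_idx:] = (a * freqs[f0_idx] + b) + scale * norm.pdf(freqs[f0_idx:], mu, sigma)
    return y

# synthetic stand-in for (frequency_interp, spectra_interp[0])
frequency_interp = np.arange(401, 4000, dtype=float)
rng = np.random.RandomState(0)
spectrum = model_bi_functions(frequency_interp) + rng.normal(0, 1e-3, frequency_interp.shape)

popt, pcov = curve_fit(model_bi_functions, frequency_interp, spectrum,
                       p0=[0.0, 0.01, 100, 3300, 300])
print(popt)  # fitted (a, b, scale, mu, sigma)
```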
# ## 4. Numerical integration - `scipy.integrate`
# Given a function object, the most generic integration routine is `scipy.integrate.quad()`.
from scipy.integrate import quad
res, err = quad(np.sin, 0, np.pi / 2)
res
# If only fixed samples are given, the trapezoidal rule (`scipy.integrate.trapz()`) or Simpson's rule (`scipy.integrate.simps()`) can be used.
x = np.linspace(0, np.pi / 2, num=200)
y = np.sin(x)
from scipy.integrate import simps
res = simps(y, x)
res
# <div class="alert alert-success">
#
# <b>EXERCISE - `scipy.integrate`</b>:
#
# We would be interested in the area under the Gaussian profile since it is related to what we want to quantify.
#
# <ul>
#      <li>Using `scipy.integrate.simps`, compute the area under the Gaussian profile between $[\mu - 3 \sigma, \mu + 3 \sigma]$. Those parameters can be found in the results of the curve fitting done previously. The indexes corresponding to the interval bounds can be computed using `find_nearest_index`.</li>
#      <li>You can do the same using the original data to see the difference in quantification.</li>
# </ul>
#
# </div>
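A sketch of the integration, using hypothetical parameter values in place of the `curve_fit` output (substitute your own fitted `popt` values):

```python
import numpy as np
from scipy.stats import norm
try:
    from scipy.integrate import simpson  # scipy >= 1.6; simps removed in newer releases
except ImportError:
    from scipy.integrate import simps as simpson

def find_nearest_index(array, value):
    return (np.abs(array - value)).argmin()

# hypothetical fitted parameters; replace with popt from the curve fit
a, b, scale, mu, sigma = 1e-5, 0.01, 100.0, 3300.0, 300.0

# frequency grid extended so that mu + 3*sigma is covered
freqs = np.arange(401, 4301, dtype=float)
gaussian_part = scale * norm.pdf(freqs, mu, sigma)  # Gaussian profile of the model

lo = find_nearest_index(freqs, mu - 3 * sigma)
hi = find_nearest_index(freqs, mu + 3 * sigma)
area = simpson(gaussian_part[lo:hi + 1], x=freqs[lo:hi + 1])
print(area)  # close to `scale`, since +/- 3 sigma holds ~99.7% of the mass
```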
# ## 5. Linear algebra - `scipy.linalg`
# The `scipy.linalg` module offers basic operations used in linear algebra such as the inverse (`scipy.linalg.inv`), pseudo-inverse (`scipy.linalg.pinv`), and determinant (`scipy.linalg.det`), as well as standard decompositions such as SVD, QR, or Cholesky, among others.
# <div class="alert alert-warning">
#
# <b>`np.array` vs. `np.matrix`:</b>
#
# <br><br>
#
# By default, the `*` operator between two `np.array`s does not perform a matrix multiplication. You need to use `np.dot` for that operation.
#
# <br><br>
#
# Another possibility is to convert the `np.array` to `np.matrix`, for which the `*` operator performs matrix multiplication. This makes expressions more readable when many algebraic operations are involved.
#
# <br><br>
#
# We illustrate this behaviour in the example below.
#
# </div>
# Let's declare two arrays of shape $3 \times 3$ and $3 \times 1$, respectively.
# +
A = np.array([[ 3, 3, -1],
[ 2, -3, 4],
[-1, .5, -1]])
b = np.array([[ 1],
[-2],
[ 0]])
# -
# Using the `*` operator does not perform a matrix multiplication: the result is a $3 \times 3$ matrix because broadcasting multiplies each column of $A$ element-wise by the vector $b$.
A * b
# You need to use the function `np.dot` to obtain the matrix multiplication.
np.dot(A, b)
# However, by converting $A$ and $b$ to matrices (i.e., `np.matrix`), it is possible to use the `*` operator directly.
# +
A = np.matrix(A)
b = np.matrix(b)
A * b
# -
# <div class="alert alert-success">
#
# <b>EXERCISE - `scipy.linalg`</b>:
#
# <ul>
# <li>Solve the following system of linear equations using the normal equation.</li>
# </ul>
# <br>
#
# $$
# \left[\begin{array}{ccc}
# 3 & 3 & -1 \\
# 2 & -3 & 4 \\
# -1 & 0.5 & -1
# \end{array}\right]
# \left[\begin{array}{c}
# x_1 \\
# x_2 \\
# x_3
# \end{array}\right] =
# \left[\begin{array}{c}
# -1 \\
# -2 \\
# 0
# \end{array}\right]
# $$
#
# This problem can be seen as:
# $$ A x = b $$
#
# $x$ can be found such that:
#
# $$ x = (A^{T} A)^{-1} A^{T} b $$
#
# Find $x$ using the above equation.
#
# </div>
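A sketch of the normal-equation solution, with $A$ and $b$ taken from the exercise statement:

```python
import numpy as np
from scipy import linalg

A = np.array([[ 3. ,  3. , -1. ],
              [ 2. , -3. ,  4. ],
              [-1. ,  0.5, -1. ]])
b = np.array([[-1.],
              [-2.],
              [ 0.]])

# x = (A^T A)^{-1} A^T b
x = linalg.inv(A.T @ A) @ A.T @ b
print(x)
```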
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>Solve the following system of linear equations using SVD.</li>
# </ul>
# <br>
#
# The above problem can also be solved using an SVD decomposition such that:
#
# $$ x = V S^{-1} (U^{T} b) $$
#
# where $U$, $S$, and $V^{T}$ can be found with `scipy.linalg.svd` such that:
# `U, S, Vh = svd(A)`
#
# </div>
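A sketch of the SVD-based solution, again with $A$ and $b$ from the exercise above:

```python
import numpy as np
from scipy.linalg import svd

A = np.array([[ 3. ,  3. , -1. ],
              [ 2. , -3. ,  4. ],
              [-1. ,  0.5, -1. ]])
b = np.array([[-1.],
              [-2.],
              [ 0.]])

U, S, Vh = svd(A)
# x = V S^{-1} (U^T b)
x = Vh.T @ np.diag(1 / S) @ (U.T @ b)
print(x)
```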
# ## 6. Statistics - `scipy.stats` and `statsmodel`
# ### `scipy.stats`
# `scipy.stats` mainly contains helpers for the most common [continuous](https://docs.scipy.org/doc/scipy/reference/stats.html#continuous-distributions) and [discrete](https://docs.scipy.org/doc/scipy/reference/stats.html#discrete-distributions) distributions.
#
# In addition, this module contains statistical functions, for instance to perform statistical tests.
import pandas as pd
data = pd.read_csv('data/brain_size.csv', sep=';', na_values=".")
data.head()
# #### 1-sample t-test
#
# `scipy.stats.ttest_1samp()` tests if the population mean of the data is likely to be equal to a given value. Let's see if the VIQ of our population is equal to 0.
# +
from scipy.stats import ttest_1samp
ttest_1samp(data['VIQ'], 0)
# -
# With a p-value of $10^{-28}$ we can claim that the population mean for the IQ (VIQ measure) is not 0.
# #### 2-sample t-test
# `scipy.stats.ttest_ind()` can compare two populations and check whether the difference is significant. We can study whether VIQ differs between males and females.
groupby_gender = data.groupby('Gender')
for gender, value in groupby_gender['VIQ']:
print((gender, value.mean()))
# To see if this difference is significant, we can use `scipy.stats.ttest_ind()`.
from scipy.stats import ttest_ind
female_viq = data[data['Gender'] == 'Female']['VIQ']
male_viq = data[data['Gender'] == 'Male']['VIQ']
ttest_ind(female_viq, male_viq)
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
#     <li>Test the difference in weight between males and females. You can fill the missing data using `pandas.fillna()` with the mean weight of the population.</li>
#     <li>Use non-parametric statistics to test the difference in VIQ between males and females (refer to `scipy.stats.mannwhitneyu`).</li>
# </ul>
# <br>
#
# </div>
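A sketch of both tests. Since `data/brain_size.csv` may not be available here, a small synthetic frame with the same column names stands in for it:

```python
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind, mannwhitneyu

# synthetic stand-in for data/brain_size.csv (same column names)
rng = np.random.RandomState(0)
data = pd.DataFrame({
    'Gender': ['Female', 'Male'] * 20,
    'VIQ': rng.normal(112, 23, 40),
    'Weight': rng.normal(150, 20, 40),
})
data.loc[2, 'Weight'] = np.nan  # simulate a missing weight

# fill missing weights with the population mean
data['Weight'] = data['Weight'].fillna(data['Weight'].mean())

female = data[data['Gender'] == 'Female']
male = data[data['Gender'] == 'Male']

weight_test = ttest_ind(female['Weight'], male['Weight'])
viq_test = mannwhitneyu(female['VIQ'], male['VIQ'])  # non-parametric
print(weight_test.pvalue, viq_test.pvalue)
```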
# ### `statsmodels`
# Given two sets of observations, x and y, we want to test the hypothesis that y is a linear function of x. In other words:
# $$
# y = x \times coef + intercept + e
# $$
# where e is observation noise. We will use the statsmodels module to:
#
# - Fit a linear model. We will use the simplest strategy, ordinary least squares (OLS).
# - Test that coef is non zero.
x = np.linspace(-5, 5, 20)
np.random.seed(1)
# normal distributed noise
y = -5 + 3 * x + 4 * np.random.normal(size=x.shape)
# Create a data frame containing all the relevant variables
data = pd.DataFrame({'x': x, 'y': y})
# Then we specify an OLS model and fit it:
from statsmodels.formula.api import ols
model = ols("y ~ x + 1", data).fit()
# We can inspect the various statistics derived from the fit:
print(model.summary())
# **Intercept:** We can remove the intercept using - 1 in the formula, or force the use of an intercept using + 1.
# Let's see another example: can VIQ be predicted from Gender?
from statsmodels.formula.api import ols
data = pd.read_csv('data/brain_size.csv', sep=';', na_values=".")
model = ols("VIQ ~ Gender + 1", data).fit()
print(model.summary())
# <div class="alert alert-success">
#
# <b>EXERCISE</b>:
#
# <ul>
# <li>Run an OLS to check if Weight can be predicted using Gender and Height.</li>
# </ul>
# <br>
#
# </div>
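A sketch of the requested model. The formula is the key part; the synthetic frame below merely stands in for `data/brain_size.csv`:

```python
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols

# synthetic stand-in for data/brain_size.csv
rng = np.random.RandomState(0)
n = 40
gender = np.where(rng.rand(n) > 0.5, 'Male', 'Female')
height = rng.normal(68, 4, n) + np.where(gender == 'Male', 3, 0)
weight = 2.0 * height + np.where(gender == 'Male', 10, 0) + rng.normal(0, 5, n)
data = pd.DataFrame({'Gender': gender, 'Height': height, 'Weight': weight})

# Weight explained by Gender (categorical) and Height (continuous)
model = ols("Weight ~ Gender + Height", data).fit()
print(model.summary())
```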
|
code/04-scipy_statsmodels_introduction.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Exploratory Data Analysis and Dimension Reduction
# + [markdown] slideshow={"slide_type": "slide"}
# ## Dimensionality and Features of Data
#
# - Suppose data matrix has one row per observation
# + [markdown] slideshow={"slide_type": "fragment"}
# - One column per attribute, feature, etc.
# + [markdown] slideshow={"slide_type": "fragment"}
# - Each attribute (hopefully) provides additional information
# Consider the following:
#
# <table>
# <tr style="text-align:center;background:white">
# <th style="text-align:center">Data 1</th>
# <th></th>
# <th style="text-align:center">Data 2</th>
# </tr>
# <tr>
# <td>
#
# **Age (days)**|**Height (in)**
# :-----:|:-----:
# 182|28
# 399|30
# 725|33
#
# </td>
# <td></td>
# <td>
#
# **Age (days)**|**Height (in)**|**Height (ft)**
# :-----:|:-----:|:-----:
# 182|28|2.33
# 399|30|2.5
# 725|33|2.75
#
# </td></tr> </table>
# + [markdown] slideshow={"slide_type": "fragment"}
# - Two height columns are adding the same information
# + [markdown] slideshow={"slide_type": "subslide"}
# - Number of attributes is often referred to as dimensionality of a dataset
# + [markdown] slideshow={"slide_type": "fragment"}
# - Number of attributes = number of columns
# + [markdown] slideshow={"slide_type": "fragment"}
# - Dimensions = rank of matrix
# + [markdown] slideshow={"slide_type": "fragment"}
# - Each attribute (hopefully) provides additional information
# Consider the following:
#
# <table>
# <tr style="text-align:center;background:white">
# <th style="text-align:center">Data 1</th>
# <th></th>
# <th style="text-align:center">Data 2</th>
# </tr>
# <tr>
# <td>
#
# **Age (days)**|**Height (in)**
# :-----:|:-----:
# 182|28
# 399|30
# 725|33
#
# </td>
# <td></td>
# <td>
#
# **Age (days)**|**Height (in)**|**Height (ft)**
# :-----:|:-----:|:-----:
# 182|28|2.33
# 399|30|2.5
# 725|33|2.75
#
# </td></tr>
# <tr style="text-align:center;background:white">
# <th style="text-align:center">2 dimensional (rank=2)</th>
# <th></th>
# <th style="text-align:center">Still 2 dimensional (rank=2)</th>
# </tr>
# </table>
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Dimensionality and Rank
#
# - Rank of a matrix $\approx$ dimensionality of dataset
# + [markdown] slideshow={"slide_type": "fragment"}
# - Having many features does not mean the data is rich:
# there may be redundant information
# + [markdown] slideshow={"slide_type": "fragment"}
# - Large matrix does not mean rank is high:
# + [markdown] slideshow={"slide_type": "fragment"}
# - Large matrix does not mean rank is high: there may be linear dependencies
# + [markdown] slideshow={"slide_type": "fragment"}
# - Linear dependency on other features:
#   some columns may be linear combinations of others
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Linear dependence and Redundant information
#
# - Linear combination of vectors:
# $$
# \frac{1}{10} \cdot \left[ \begin{array}{l}{2} \\ {3} \\ {4}\end{array}\right]+2 \cdot \left[ \begin{array}{l}{5} \\ {7} \\ {9}\end{array}\right]=\left[ \begin{array}{l}{10.2} \\ {14.3} \\ {18.4}\end{array}\right]
# $$
# + [markdown] slideshow={"slide_type": "fragment"}
# - A matrix (mxn) times a column (nx1) gives
# one linear combination of the columns of the matrix.
# + [markdown] slideshow={"slide_type": "fragment"}
# - A matrix (mxn) times a matrix (nxk) has k columns that are
# each a matrix (mxn) times a column (nx1)
# + [markdown] slideshow={"slide_type": "subslide"}
# - The two height columns are linear combinations of each other
#
# $$
# \begin{array}{|c|c|}\hline \text { Age (days) } & {\text { Height (in) }} \\ \hline 182 & {28} \\ \hline 399 & {30} \\ \hline 725 & {33} \\ \hline\end{array}
# \times
# \begin{array}{|l|l|l|}\hline 1 & {0} & {0} \\ \hline 0 & {1} & {1 / 12} \\ \hline\end{array}
# =
# \begin{array}{|c|c|c|}\hline \text { Age (days) } & {\text { Height (in) }} & {\text { Height }(\mathrm{ft})} \\ \hline 182 & {28} & {2.33} \\ \hline 399 & {30} & {2.5} \\ \hline 725 & {33} & {2.75} \\ \hline\end{array}
# $$
# + [markdown] slideshow={"slide_type": "fragment"}
# $$
# \small
# \begin{array}{|l|l|}\hline \text { width } & {\text { length }} & {\text { area }} \\ \hline 20 & {20} & {400} \\ \hline 16 & {12} & {192} \\ \hline 24 & {12} & {288} \\ \hline 25 & {24} & {600} \\ \hline\end{array}
# \times
# \begin{array}{|c|c|c|c|}\hline 1 & {0} & {0} & {2} \\ \hline 0 & {1} & {0} & {2} \\ \hline 0 & {0} & {1} & {0} \\ \hline\end{array}
# =
# \begin{array}{|l|l|l|}\hline \text { width } & {\text { length }} & {\text { area }} & {\text { perimeter }} \\ \hline 20 & {20} & {400} & {80} \\ \hline 16 & {12} & {192} & {60} \\ \hline 24 & {12} & {288} & {72} \\ \hline\end{array}
# $$
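The redundancy can be checked numerically: appending an exactly dependent column (height in feet) leaves the matrix rank unchanged. A quick illustration:

```python
import numpy as np

age = np.array([182., 399., 725.])
height_in = np.array([28., 30., 33.])

X2 = np.column_stack([age, height_in])                  # Data 1
X3 = np.column_stack([age, height_in, height_in / 12])  # Data 2 adds Height (ft)

print(np.linalg.matrix_rank(X2))  # 2
print(np.linalg.matrix_rank(X3))  # still 2
```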
# + [markdown] slideshow={"slide_type": "fragment"}
# - What if columns are not *perfect* linear combinations?
# + [markdown] slideshow={"slide_type": "fragment"}
# - Columns may be *approximately* a linear combination of others (numerical rank)
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Linear Independence and Unique information
#
# - If two vectors are orthogonal, they cannot be used to describe each other
# + [markdown] slideshow={"slide_type": "fragment"}
# - If two vectors are orthogonal, one is *not* a linear combination of the other
# + [markdown] slideshow={"slide_type": "fragment"}
# - Orthogonal matrix $Q$: all columns are mutually orthogonal, hence linearly independent
# + [markdown] slideshow={"slide_type": "fragment"}
# - If $Q$ is also orthonormal, its columns are orthogonal and each has length 1
# -
# - Since $Q$ is orthonormal,
# $$ QQ^T = Q^TQ = I $$
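This identity is easy to verify numerically; the QR decomposition of a random square matrix yields an orthonormal $Q$ (a quick check):

```python
import numpy as np

rng = np.random.RandomState(0)
Q, _ = np.linalg.qr(rng.randn(4, 4))  # Q has orthonormal columns

print(np.allclose(Q.T @ Q, np.eye(4)))  # True
print(np.allclose(Q @ Q.T, np.eye(4)))  # True, since Q is square
```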
# + [markdown] slideshow={"slide_type": "slide"}
# ## Rank of matrix and Singular Value Decomposition
#
# - Any matrix $X$ with rank $r$ can be written as
# 
# + [markdown] slideshow={"slide_type": "fragment"}
# - (Singular value decomposition) Choose $U$, $\Sigma$ & $V$ such that
# + [markdown] slideshow={"slide_type": "fragment"}
# - $U$ and $V$ are both orthonormal
# + [markdown] slideshow={"slide_type": "fragment"}
# - $\Sigma$ is diagonal with $r$ non-zero elements in order from largest to smallest
# + [markdown] slideshow={"slide_type": "fragment"}
# - This matrix decomposition is called a singular value decomposition (SVD)
# + [markdown] slideshow={"slide_type": "subslide"}
# $$ X = (U\Sigma)\, V^T = W\, V^T $$
#
# - Columns of $W$ can be thought of as a set of basis vectors
# + [markdown] slideshow={"slide_type": "fragment"}
# - Columns of $W$ are "unique" information and
# + [markdown] slideshow={"slide_type": "fragment"}
# - Columns of $V^T$ are coefficients for each piece of information
# + [markdown] slideshow={"slide_type": "fragment"}
# - Similar idea: Fourier transform of time series signal
# "Unique" information: sinusoidal basis functions
# Coefficients: contribution of each basis function
# 
# + [markdown] slideshow={"slide_type": "slide"}
# 
# + [markdown] slideshow={"slide_type": "slide"}
# ## Matrix Decompositions: Principal Components Analysis
#
# $X$: Data matrix of size $\mathbb{R}^{n\times p}$
#
# - Principal Components Analysis (PCA): $ X = Q Y $
#
# + $Q$: Orthonormal rotation matrix
# + $Y$: Rotated data matrix
# + [markdown] slideshow={"slide_type": "fragment"}
# - Rotation matrix $Q$ is computed to transform data $Y$
# + [markdown] slideshow={"slide_type": "fragment"}
# - First columns of $Y$ contain a larger proportion of _information_
# + [markdown] slideshow={"slide_type": "fragment"}
# - PCA can be described in terms of SVD factors
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Matrix Decompositions: Independent Components Analysis
#
# 
# + [markdown] slideshow={"slide_type": "subslide"}
# 
# + [markdown] slideshow={"slide_type": "subslide"}
# $X$: Data matrix of size $\mathbb{R}^{n\times p}$
#
# - Independent Components Analysis (ICA): $ X = W Y $
# - $W$: independent components
# - $Y$: mixing coefficients
# + [markdown] slideshow={"slide_type": "fragment"}
# - Independent components matrix $W$ (hopefully) represents underlying signals
# + [markdown] slideshow={"slide_type": "fragment"}
# - Matrix $Y$ contain mixing coefficients
# + hideCode=false slideshow={"slide_type": "skip"}
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal
import seaborn as sns
import pandas as pd
from sklearn.decomposition import FastICA, PCA
## Adapted from scikit-learn ICA example
np.random.seed(0)
n_samples = 2000
time = np.linspace(0, 8, n_samples)
# + slideshow={"slide_type": "slide"}
s1 = np.sin(2 * time) # Signal 1: sinusoidal signal
s2 = np.sign(np.sin(3 * time)) # Signal 2: square signal
s3 = signal.sawtooth(2 * np.pi * time) # Signal 3: saw tooth signal
S = np.c_[s1, s2, s3]
S += 0.2 * np.random.normal(size=S.shape) # Add noise
# + slideshow={"slide_type": "skip"}
S /= S.std(axis=0) # Standardize data
Sdf = pd.DataFrame(S)
# + hideCode=true slideshow={"slide_type": "fragment"}
fig, axes = plt.subplots(nrows=3, ncols=1, figsize=(16, 6),)
fig.suptitle('True Component Signals')
for i, column in enumerate(Sdf.columns):
    sns.lineplot(data=Sdf[column], ax=axes[i % 3])
# + slideshow={"slide_type": "slide"}
# Mix data
A = np.array([[1, 1, 1], [0.5, -2, 1.0], [-1.5, 1.0, 2.0]]) # Mixing matrix
X = np.dot(S, A.T) # Generate observations
Xdf = pd.DataFrame(X)
# + hideCode=true slideshow={"slide_type": "fragment"}
fig, axes = plt.subplots(nrows=3, ncols=1, figsize=(16, 6),)
fig.suptitle('Data: Simulated Mixed Signals')
for i, column in enumerate(Xdf.columns):
    sns.lineplot(data=Xdf[column], ax=axes[i % 3])
# + slideshow={"slide_type": "slide"}
# Singular value decomposition
U, S, V = np.linalg.svd(X, full_matrices=False)
W = U @ np.diag(S) # Matrix W: a "basis"
# + hideCode=true slideshow={"slide_type": "fragment"}
Wdf = pd.DataFrame(W)
fig, axes = plt.subplots(nrows=3, ncols=1, figsize=(16, 6),)
fig.suptitle("Basis/Component signals from SVD")
for i, column in enumerate(Wdf.columns):
    sns.lineplot(data=Wdf[column], ax=axes[i % 3])
# + slideshow={"slide_type": "slide"}
# Compute ICA
ica = FastICA(n_components=3)
S_ = ica.fit_transform(X) # Reconstruct signals
A_ = ica.mixing_ # Get estimated mixing matrix
# + hideCode=true slideshow={"slide_type": "fragment"}
Sdf_ = pd.DataFrame(S_)
fig, axes = plt.subplots(nrows=3, ncols=1, figsize=(16, 6),)
fig.suptitle('ICA Component Signals')
for i, column in enumerate(Sdf_.columns):
    sns.lineplot(data=Sdf_[column], ax=axes[i % 3])
# + slideshow={"slide_type": "slide"}
# For comparison, compute PCA
pca = PCA(n_components=3)
H = pca.fit_transform(X) # Reconstruct signals based on orthogonal components
# + hideCode=true slideshow={"slide_type": "fragment"}
Hdf = pd.DataFrame(H)
fig, axes = plt.subplots(nrows=3, ncols=1, figsize=(16, 6),)
fig.suptitle('PCA Component Signals')
for i, column in enumerate(Hdf.columns):
    sns.lineplot(data=Hdf[column], ax=axes[i % 3])
# + [markdown] slideshow={"slide_type": "slide"}
# ## ICA vs PCA
#
# - PCA aligns data to new coordinate system
# + [markdown] slideshow={"slide_type": "fragment"}
# - PCA components are orthonormal
# + [markdown] slideshow={"slide_type": "fragment"}
# - ICA finds hidden signals
# + [markdown] slideshow={"slide_type": "fragment"}
# - ICA components (signals) may not be orthogonal
# + [markdown] slideshow={"slide_type": "slide"}
# 
# + [markdown] slideshow={"slide_type": "slide"}
# 
# + [markdown] slideshow={"slide_type": "subslide"}
# ## ICA Identifiability
#
# 
# + [markdown] slideshow={"slide_type": "slide"}
# ## Non-negative Matrix Factorization
#
# - Assume the data $X$ is a $p\times n$ matrix of non-negative values
# + [markdown] slideshow={"slide_type": "fragment"}
# - e.g., images, probabilities, counts, etc.
# + [markdown] slideshow={"slide_type": "fragment"}
# - NMF computes the following factorization:
# $$ \min_{W,H} \| X - WH \|_F\\
# \text{ subject to } W\geq 0,\ H\geq 0, $$
# where $W$ is ${p\times r}$ matrix and $H$ is ${r\times n}$ matrix.
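A minimal sketch of this factorization with scikit-learn's `NMF` on random non-negative data (the shapes and component count here are arbitrary):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.RandomState(0)
X = np.abs(rng.randn(20, 10))  # non-negative data matrix

nmf = NMF(n_components=3, init='nndsvda', max_iter=500, random_state=0)
W = nmf.fit_transform(X)  # (20, 3), non-negative
H = nmf.components_       # (3, 10), non-negative
print(np.linalg.norm(X - W @ H))  # Frobenius reconstruction error
```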
# + [markdown] slideshow={"slide_type": "slide"}
# ## NMF for Image Analysis
#
# 
# + [markdown] slideshow={"slide_type": "slide"}
# ## NMF for Hyperspectral image analysis
#
# 
# + [markdown] slideshow={"slide_type": "slide"}
#
# ## NMF for Topic Discovery
#
# 
#
#
# - [More NMF examples](https://www.cs.rochester.edu/u/jliu/CSC-576/NMF-tutorial.pdf)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Scikit-learn Functions
#
# - [Singular Value Decomposition](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.TruncatedSVD.html)
#
# - [Principal Component Analysis](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html)
#
# - [Independent Component Analysis](https://scikit-learn.org/stable/modules/classes.html#module-sklearn.decomposition)
# [Blind Source Separation](https://scikit-learn.org/stable/auto_examples/decomposition/plot_ica_blind_source_separation.html)
#
# - [Non-negative Matrix Factorization](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.NMF.html#sklearn.decomposition.NMF)
# [Topic Discovery](https://scikit-learn.org/stable/auto_examples/applications/plot_topics_extraction_with_nmf_lda.html)
# [Image Analysis](https://scikit-learn.org/stable/auto_examples/decomposition/plot_faces_decomposition.html)
#
# - [Matrix Decompositions](https://scikit-learn.org/stable/modules/classes.html#module-sklearn.decomposition)
# + [markdown] slideshow={"slide_type": "slide"}
# ## References
#
# - [A Tutorial on Principal Component Analysis, <NAME>](https://arxiv.org/abs/1404.1100)
#
# - [A Tutorial on Independent Component Analysis, <NAME>](https://arxiv.org/abs/1404.2986)
#
# - UC Berkeley's Data Science 100 lecture notes, <NAME>
#
# - [The Why and How of Nonnegative Matrix Factorization - <NAME>](https://arxiv.org/abs/1401.5226)
|
lecture-notes/07-Exploratory-Data-Analysis-and-Dimension-reduction.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + [markdown] deletable=true editable=true
# Welcome to the Beta folder! Here's where you can put your Beta notebooks and run them in the protocol.
#
# Make a copy of the Template notebook and give it a concise descriptive name.
#
# Next, edit this cell to describe your notebook and the inputs and outputs.
# + deletable=true editable=true
# Put your imports here!
import os
import glob
import json
import logging
# + deletable=true editable=true
# This cell loads the configuration and contains some helper functions.
# Please don't edit it.
with open("../conf.json","r") as conf:
c=json.load(conf)
sample_id = c["sample_id"]
print("Running on sample: {}".format(sample_id))
logging.basicConfig(**c["info"]["logging_config"])
def and_log(s):
logging.info(s)
return s
j = {}
def load_notebook_output(notebook_num):
outputfile = "../{}.json".format(notebook_num)
try:
with open(outputfile, "r") as f:
result = json.load(f)
return result
except IOError:
print("Error! Couldn't find output of previous notebook at {}".format(outputfile))
return {}
# + deletable=true editable=true
# Add a line to the Beta log describing what you are running
print and_log("Running the Beta Template notebook...")
# What is the name of your notebook? Fill this out. For example, for "Template.ipynb", write "Template".
notebook_name = "Template"
# + [markdown] deletable=true editable=true
# Put your notebook code in the cells below!
# Outputs of the notebook should be logged in the "j" dictionary.
#
# Make an effort to contain all outputs within the dictionary.
# If an output cannot be stored in a dictionary (such as a PNG image), save it in the current directory.
#
# Some useful functions and variables for loading input:
# * `sample_id` : contains the current sample ID
# * `load_notebook_output()` : Takes the notebook number (non-beta notebooks only);
# returns a dictionary of that notebook's output
# * `c["dir"]["secondary"]` : Path to the secondary output for this sample
# * `c["dir"]["cohort"]` : Path to the background cohort
# * `c["dir"]["ref"]` : Path to the external-references directory
#
# Examples of these:
# + deletable=true editable=true
# Populating your output; getting the sample ID
j["example"] = "Hello World! Greetings from sample {}\n".format(sample_id)
print j["example"]
# Loading output from a previous notebook
example_info = load_notebook_output("2.2")
print "Loaded the 2.2 output json. Keys are: {}\n".format(example_info.keys())
# What files are in the secondary output for this sample?
print glob.glob(os.path.join(c["dir"]["secondary"], "ucsctreehouse-fusion-*", "*"))
print "\n"
# What is the info for the cohort?
compendium_info_file = os.path.join(c["dir"]["cohort"], "compendium_info.json")
with open(compendium_info_file, "r") as f:
print json.load(f)
# What .txt files are in the reference directory?
glob.glob(os.path.join(c["dir"]["ref"], "*.txt"))
# + [markdown] deletable=true editable=true
# Finally, this code writes the "j" output dictionary to the json document that goes with this notebook.
# For example, "Template.json".
# + deletable=true editable=true
with open("{}.json".format(notebook_name), "w") as jsonfile:
json.dump(j, jsonfile, indent=2)
print("Done.")
|
care/beta/Template.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] toc-hr-collapsed=true toc-nb-collapsed=true
# # Validation of FERC Form 1 Large Steam Plants
# This notebook runs sanity checks on the FERC Form 1 large steam plants table (`plants_steam_ferc1`). These are the same checks run by the `plants_steam_ferc1` PyTest-based data validations. The notebook and visualizations are meant to be used as a diagnostic tool, to help understand what's wrong when the PyTest-based data validations fail for some reason.
# -
# %load_ext autoreload
# %autoreload 2
import sys
import pandas as pd
import sqlalchemy as sa
import pudl
import warnings
import logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)
handler = logging.StreamHandler(stream=sys.stdout)
formatter = logging.Formatter('%(message)s')
handler.setFormatter(formatter)
logger.handlers = [handler]
import matplotlib.pyplot as plt
import matplotlib as mpl
# %matplotlib inline
plt.style.use('ggplot')
mpl.rcParams['figure.figsize'] = (10,4)
mpl.rcParams['figure.dpi'] = 150
pd.options.display.max_columns = 56
pudl_settings = pudl.workspace.setup.get_defaults()
ferc1_engine = sa.create_engine(pudl_settings['ferc1_db'])
pudl_engine = sa.create_engine(pudl_settings['pudl_db'])
pudl_settings
# ## Pull `plants_steam_ferc1` and calculate some useful values
# First we pull the original (post-ETL) FERC 1 large plants data out of the PUDL database using an output object. The FERC Form 1 data only exists at annual resolution, so there's no inter-frequency aggregation to think about.
pudl_out = pudl.output.pudltabl.PudlTabl(pudl_engine)
plants_steam_ferc1 = (
pudl_out.plants_steam_ferc1().
assign(
water_limited_ratio=lambda x: x.water_limited_capacity_mw / x.capacity_mw,
not_water_limited_ratio=lambda x: x.not_water_limited_capacity_mw / x.capacity_mw,
peak_demand_ratio=lambda x: x.peak_demand_mw / x.capacity_mw,
capability_ratio=lambda x: x.plant_capability_mw / x.capacity_mw,
)
)
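The `assign` chain above only adds derived ratio columns to the table. The same pattern on a hypothetical miniature of the data (column names abbreviated for illustration):

```python
import pandas as pd

# Hypothetical miniature of the FERC 1 steam table: two plants with
# their nameplate capacity and reported peak demand in MW.
toy = pd.DataFrame({
    "capacity_mw": [100.0, 250.0],
    "peak_demand_mw": [90.0, 300.0],
})

# assign() with a lambda computes the new column row-wise from the
# frame, so later validations can test the ratio against fixed bounds.
toy = toy.assign(peak_demand_ratio=lambda x: x.peak_demand_mw / x.capacity_mw)
print(toy.peak_demand_ratio.tolist())  # [0.9, 1.2]
```

A ratio above 1.0 (the second plant) is exactly the kind of record the bounds checks below are designed to flag.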
# # Validating Historical Distributions
# As a sanity check of the testing process itself, we can check whether the entire historical distribution has attributes that place it within the extremes of a historical subsampling of the distribution. In this case, we sample each historical year, look at the range of values taken on by some quantile, and see whether the same quantile for the whole of the dataset fits within that range.
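A minimal sketch of that self-consistency check on synthetic data, assuming an annual `report_year` column: compute the chosen quantile within each year, then verify that the same quantile of the pooled dataset falls inside the range spanned by the yearly values.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Hypothetical data: one value column reported over three years.
df = pd.DataFrame({
    "report_year": np.repeat([2018, 2019, 2020], 100),
    "capacity_mw": rng.normal(loc=200.0, scale=25.0, size=300),
})

quantile = 0.5
# Quantile of each annual subsample...
yearly = df.groupby("report_year")["capacity_mw"].quantile(quantile)
# ...and of the whole historical distribution.
overall = df["capacity_mw"].quantile(quantile)

# The pooled quantile should sit within the extremes of the yearly ones.
assert yearly.min() <= overall <= yearly.max()
```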
pudl.validate.plot_vs_self(plants_steam_ferc1, pudl.validate.plants_steam_ferc1_self)
# + [markdown] toc-hr-collapsed=false
# # Validation Against Fixed Bounds
# Some of the variables reported in this table have a fixed range of reasonable values, like the heat content per unit of a given fuel type. These variables can be tested for validity against external standards directly. In general we have two kinds of tests in this section:
# * **Tails:** are the extreme values too extreme? Typically, this is at the 5% and 95% level, but depending on the distribution, sometimes other thresholds are used.
# * **Middle:** Is the central value of the distribution where it should be?
# -
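Both kinds of test reduce to quantile comparisons. A sketch with pandas, where the values and bounds are illustrative, not the actual PUDL test cases:

```python
import pandas as pd

# Hypothetical reported heat content values, mmBTU per unit of fuel.
heat_content = pd.Series([16.5, 17.0, 17.2, 17.5, 18.0, 18.3, 19.0])

# "Tails" test: the extreme quantiles must stay inside fixed bounds.
assert heat_content.quantile(0.05) >= 15.0
assert heat_content.quantile(0.95) <= 20.0

# "Middle" test: the central value must land where it should be.
assert 16.0 <= heat_content.quantile(0.50) <= 19.0
```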
# ## Plant Capacities
pudl.validate.plot_vs_bounds(plants_steam_ferc1, pudl.validate.plants_steam_ferc1_capacity)
# ## CapEx & OpEx
pudl.validate.plot_vs_bounds(plants_steam_ferc1, pudl.validate.plants_steam_ferc1_expenses)
# ## Plant Capacity Ratios
pudl.validate.plot_vs_bounds(plants_steam_ferc1, pudl.validate.plants_steam_ferc1_capacity_ratios)
# ## Plant Connected Hours
# Currently expected to fail: ~10% of all plants have > 8760 hours.
pudl.validate.plot_vs_bounds(plants_steam_ferc1, pudl.validate.plants_steam_ferc1_connected_hours)
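The ~10% failure rate quoted above can be measured directly: a non-leap year has only 8,760 hours, so the share of physically impossible records is just the mean of a boolean mask. A sketch on hypothetical hours data:

```python
import pandas as pd

# Hypothetical plant_hours_connected_while_generating values; two of
# the ten records exceed the 8760 hours in a non-leap year.
hours = pd.Series([8000, 8500, 8760, 9000, 7500, 8200, 8900, 8600, 8100, 8700])

# Fraction of plants reporting more connected hours than exist in a year.
frac_over = (hours > 8760).mean()
print(frac_over)  # 0.2
```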
# # Validate an Individual Column
# If there's a particular column that is failing the validation, you can check several different validation cases with something like this cell:
testcol = "plant_hours_connected_while_generating"
self_tests = [x for x in pudl.validate.plants_steam_ferc1_self if x["data_col"] == testcol]
pudl.validate.plot_vs_self(plants_steam_ferc1, self_tests)
|
test/validate/notebooks/validate_plants_steam_ferc1.ipynb
|