markdown | code | output | license | path | repo_name |
|---|---|---|---|---|---|
We'll define a list of ages, and ask filter to sift them using the function we defined: | ages = [0, 1, 4, 10, 20, 35, 56, 84, 120]
mature_ages = filter(is_mature, ages)
print(tuple(mature_ages)) | _____no_output_____ | CC-BY-4.0 | week6/2_Functional_Behavior.ipynb | BrandMan2299/Notebooks |
As we've learned, filter returns only the ages equal to or greater than 18. Let's sharpen a point: the function we pass to filter doesn't necessarily have to return True or False. The value 0, for example, is equivalent to False, so filter will screen out any value for which the function returns 0: | to_sum = [(1, -1), (2, 5), (5, -3, -2), (1, 2, 3)]
sum_is_not_zero = filter(sum, to_sum)
print(tuple(sum_is_not_zero)) | _____no_output_____ | CC-BY-4.0 | week6/2_Functional_Behavior.ipynb | BrandMan2299/Notebooks |
In the last cell we passed sum to filter as the function we want to apply, and to_sum as the items we want to operate on. The tuples whose elements sum to 0 were filtered out, and we got back an iterator whose items are only those whose sum differs from 0. As a final trick, we'll learn that filter can also receive None as its first parameter, instead of a function. This makes filter apply no function at all to the items passed in -- that is, it filters them as they are. Items equivalent to True will be returned, and items equivalent to False will not: | to_sum = [0, "", None, 0.0, True, False, "Hello"]
equivalent_to_true = filter(None, to_sum)
print(tuple(equivalent_to_true)) | _____no_output_____ | CC-BY-4.0 | week6/2_Functional_Behavior.ipynb | BrandMan2299/Notebooks |
Write a function that receives a list of strings and returns only the palindromic strings in it. A string is considered a palindrome if reading it from right to left and from left to right produces the same expression. Use filter. Important! Solve this before you continue! Anonymous functions Another trick we'll add to our toolbox is anonymous functions. Don't be alarmed by the intimidating name -- all it means is "functions that have no name". Before you raise an eyebrow and ask yourself why they're useful, let's look at a few examples. Recall the definition of the addition function we created not long ago: | def add(num1, num2):
return num1 + num2 | _____no_output_____ | CC-BY-4.0 | week6/2_Functional_Behavior.ipynb | BrandMan2299/Notebooks |
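As a sketch of one way to solve the palindrome exercise above (the function name `get_palindromes` is my own choice, not from the original notebook):

```python
def get_palindromes(strings):
    # A string is a palindrome if it reads the same in both directions;
    # s[::-1] is the reversed string, so filter keeps only matching ones.
    return list(filter(lambda s: s == s[::-1], strings))

words = ["abba", "hello", "level", "python", "noon"]
print(get_palindromes(words))  # ['abba', 'level', 'noon']
```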
And let's define exactly the same function in anonymous form: | add = lambda num1, num2: num1 + num2
print(add(5, 2)) | _____no_output_____ | CC-BY-4.0 | week6/2_Functional_Behavior.ipynb | BrandMan2299/Notebooks |
Before we explain where the "function with no name" part comes in, let's focus on the right-hand side of the assignment. How is an anonymous function phrased? We declared that we want to create an anonymous function using the lambda keyword. Immediately after it, we listed the names of all the parameters the function receives, separated from one another by commas. To separate the parameter list from the function's return value, we used a colon. After the colon, we wrote the expression we want the function to return. The parts of an anonymous function definition using the lambda keyword. A girl has no name. How does this definition differ from a regular function definition? It doesn't, really. The goal is to provide syntax that makes our lives easier when we want to write a tiny one-line function. Let's see, for example, a use of filter to screen out all the non-positive items: | def is_positive(number):
return number > 0
numbers = [-2, -1, 0, 1, 2]
positive_numbers = filter(is_positive, numbers)
print(tuple(positive_numbers)) | _____no_output_____ | CC-BY-4.0 | week6/2_Functional_Behavior.ipynb | BrandMan2299/Notebooks |
Instead of defining a new function called is_positive, we can use an anonymous function: | numbers = [-2, -1, 0, 1, 2]
positive_numbers = filter(lambda n: n > 0, numbers)
print(tuple(positive_numbers)) | _____no_output_____ | CC-BY-4.0 | week6/2_Functional_Behavior.ipynb | BrandMan2299/Notebooks |
How does this work? Instead of passing filter a function we created in advance, we used lambda to create a function right there on the same line. The function we defined receives a number (n) and returns True if it's positive, or False otherwise. Note that this way we really didn't need to give the function a name. The use of anonymous functions isn't limited to map and filter, of course. It's common to use lambda also with functions like sorted, which receive a function as an argument. The sorted function lets us order values, and even lets us define what to order them by. For a refresher on using this function, see the notebook on built-in functions in week 4. Let's sort, for example, the characters in the following list by their date of birth: | closet = [
{'name': 'Peter', 'year_of_birth': 1927, 'gender': 'Male'},
{'name': 'Edmund', 'year_of_birth': 1930, 'gender': 'Male'},
{'name': 'Lucy', 'year_of_birth': 1932, 'gender': 'Female'},
{'name': 'Susan', 'year_of_birth': 1928, 'gender': 'Female'},
{'name': 'Jadis', 'year_of_birth': 0, 'gender': 'Female'},
] | _____no_output_____ | CC-BY-4.0 | week6/2_Functional_Behavior.ipynb | BrandMan2299/Notebooks |
We want the list to be ordered by the year_of_birth key. That is, given a dictionary representing a character named d, we need to get d['year_of_birth'] and order the list by it. Let's get to work: | sorted(closet, key=lambda d: d['year_of_birth']) | _____no_output_____ | CC-BY-4.0 | week6/2_Functional_Behavior.ipynb | BrandMan2299/Notebooks |
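The same `key=lambda` idea composes with `sorted`'s other arguments; for example, `reverse=True` orders the characters youngest first (a small sketch using a trimmed-down version of the list above):

```python
characters = [
    {'name': 'Peter', 'year_of_birth': 1927},
    {'name': 'Lucy', 'year_of_birth': 1932},
    {'name': 'Susan', 'year_of_birth': 1928},
]
# Largest year_of_birth first, i.e. the youngest character first
youngest_first = sorted(characters, key=lambda d: d['year_of_birth'], reverse=True)
print([d['name'] for d in youngest_first])  # ['Lucy', 'Susan', 'Peter']
```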
Identifying the Consumption Profile of Customers of a Financial Institution A brief introduction A financial institution X is interested in identifying its customers' spending profiles. By identifying the right customers, it can improve the communication of promotional assets, use its communication channels more efficiently, and increase customer engagement with its product. About the study The data is anonymized for security reasons. In the dataset, we have the amount spent by 121,818 customers, in 2019, in each business sector. The base is already clean. The goal here is to present one possible way to cluster customers based on their consumption. The data spans one year in order to reduce the effect of seasonality. Importing the libraries You will need to install the kmodes package, if you don't have it. | # pip install --upgrade kmodes
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import os
import random
import plotly.express as px
# model package
from kmodes.kmodes import KModes
# from sklearn.model_selection import train_test_split
import warnings
warnings.filterwarnings("ignore")
random.seed(2020)
pd.options.display.float_format = '{:,.2f}'.format | _____no_output_____ | MIT | Script/kmodes-cluster-perfil-consumo-cliente.ipynb | helenacypreste/cluster-cliente-perfil-consumo |
Reading the data | # Load the data
dados = pd.read_csv('../../dados_gasto_rmat.csv', sep=',')
dados.head(10)
print('O arquivo possui ' + str(dados.shape[0]) + ' linhas e ' + str(dados.shape[1]) + ' colunas.')
# Data types
dados.dtypes
# Checking for null values
dados.isnull().values.any() | _____no_output_____ | MIT | Script/kmodes-cluster-perfil-consumo-cliente.ipynb | helenacypreste/cluster-cliente-perfil-consumo |
The base has no null values. Data visualization | # Total number of customers
len(dados['CLIENTE'].unique()) | _____no_output_____ | MIT | Script/kmodes-cluster-perfil-consumo-cliente.ipynb | helenacypreste/cluster-cliente-perfil-consumo |
Sum of the amount spent in each business sector. | # Data aggregated by RMAT and the percentage spent
dados_agreg = dados.groupby(['RMAT'])['VALOR_GASTO'].sum().reset_index()
dados_agreg['percentual'] = round(dados_agreg['VALOR_GASTO']/sum(dados_agreg['VALOR_GASTO'])*100,2)
dados_agreg.head()
# Get the 30 RMATs with the highest customer spending
top_rotulos = dados_agreg.sort_values(by = 'percentual', ascending = False)[:30]
# Bar chart of the RMATs
ax = top_rotulos.plot.barh(x='RMAT', y='percentual', rot=0, figsize = (20, 15), fontsize=20, color='violet')
plt.title('Percentage of amount spent by business sector', fontsize=22)
plt.xlabel('')
plt.ylabel('')
plt.show() | _____no_output_____ | MIT | Script/kmodes-cluster-perfil-consumo-cliente.ipynb | helenacypreste/cluster-cliente-perfil-consumo |
Supermarkets, gas stations, and drugstores are the sectors where customers spend the most, representing 13.48%, 7.07%, and 5.75%, respectively. Building the base for the model We'll build a base at the customer level, whose columns are the percentages of consumption in each business sector. Here, we have many 0 percentages in certain business sectors, and this has a negative impact on building the model. Therefore, we'll categorize the variables according to the variation of the percentage for each attribute. | dados.head() | _____no_output_____ | MIT | Script/kmodes-cluster-perfil-consumo-cliente.ipynb | helenacypreste/cluster-cliente-perfil-consumo |
The pivot function transposes the data, turning the values of "RMAT" into columns. | # Invert the data frame (put the RMATs as columns)
cli_pivot = dados.pivot(index='CLIENTE', columns='RMAT', values='VALOR_GASTO')
cli_pivot.fillna(0, inplace = True)
# Compute the percentage of each customer's transaction volume per RMAT
cli_pivot = cli_pivot.apply(lambda x: x.apply(lambda y: 100*y/sum(x)),axis = 1)
cli_pivot.head() | _____no_output_____ | MIT | Script/kmodes-cluster-perfil-consumo-cliente.ipynb | helenacypreste/cluster-cliente-perfil-consumo |
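A minimal, self-contained sketch of the same pivot-then-normalize step (the toy customers and values here are invented for illustration):

```python
import pandas as pd

# Toy data in the same shape as `dados`: one row per customer/sector pair
toy = pd.DataFrame({
    'CLIENTE': [1, 1, 2, 2],
    'RMAT': ['SUPERMARKET', 'FUEL', 'SUPERMARKET', 'PHARMACY'],
    'VALOR_GASTO': [80.0, 20.0, 50.0, 50.0],
})
pivot = toy.pivot(index='CLIENTE', columns='RMAT', values='VALOR_GASTO').fillna(0)
# Dividing each row by its own sum gives per-customer percentages;
# div(..., axis=0) is an alternative to the apply/lambda used above
pct = pivot.div(pivot.sum(axis=1), axis=0) * 100
print(pct.loc[1, 'SUPERMARKET'])  # 80.0
```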
Below is a function that breaks each column's values into the chosen number of categories. In this case, we use 8 breaks. | # Function to categorize the variables
def hcut(df, colunas, nlevs, prefixo=''):
x = df.copy()
for c in colunas:
x[prefixo+c] = pd.cut(x[c] , bins=nlevs, include_lowest = False, precision=0)
return x
base_cluster = hcut(cli_pivot, cli_pivot.columns, 8, 'esc_')
base_cluster.head() | _____no_output_____ | MIT | Script/kmodes-cluster-perfil-consumo-cliente.ipynb | helenacypreste/cluster-cliente-perfil-consumo |
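The heart of `hcut` is `pd.cut`, which splits a numeric range into equal-width intervals; a tiny sketch of what a single column goes through (toy values invented here, and 4 bins instead of the notebook's 8):

```python
import pandas as pd

values = pd.Series([1.0, 10.0, 25.0, 40.0, 55.0, 70.0, 85.0, 100.0])
# Split the value range into 4 equal-width bins
binned = pd.cut(values, bins=4, include_lowest=False, precision=0)
print(len(binned.cat.categories))   # 4 intervals
print(binned.value_counts().sum())  # all 8 values fall into some bin
```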
Now we'll filter the variables needed for modeling. | # Select only the categorized columns that will be used in the model
filter_col = [col for col in base_cluster if col.startswith('esc_')]
df1 = base_cluster.loc[:,filter_col].reset_index()
df1.head() | _____no_output_____ | MIT | Script/kmodes-cluster-perfil-consumo-cliente.ipynb | helenacypreste/cluster-cliente-perfil-consumo |
Building the model For our case, we'll use the clustering method called K-modes. This method is an extension of K-means. Instead of distances, it uses dissimilarity (that is, a quantification of the total mismatches between two objects: the smaller this number, the more similar the two objects are). In addition, it uses modes. Each vector of elements is created so as to minimize the differences between the vector itself and each object in the data. Thus, we have as many mode vectors as the number of clusters required, since they act as centroids. Here, we'll split the customers into 7 clusters. | km_huang = KModes(n_clusters=7, init = "Huang", n_init = 5, verbose=1, random_state=2020)
fitClusters = km_huang.fit_predict(df1)
# Add the clusters to the base
df1['cluster'] = fitClusters
base_cluster['cluster'] = fitClusters | _____no_output_____ | MIT | Script/kmodes-cluster-perfil-consumo-cliente.ipynb | helenacypreste/cluster-cliente-perfil-consumo |
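The dissimilarity K-modes relies on is simple matching: count the positions where two categorical vectors disagree. As a sketch of the idea (not the kmodes package's internal code):

```python
def matching_dissimilarity(a, b):
    # Count mismatched positions: 0 means identical, len(a) means fully different
    return sum(1 for x, y in zip(a, b) if x != y)

u = ['low', 'high', 'mid']
v = ['low', 'mid', 'mid']
print(matching_dissimilarity(u, v))  # 1
```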
Percentage of customers per cluster. | df1['cluster'].value_counts()/len(df1)*100
df2 = base_cluster.drop(columns=filter_col)
df2.head()
# For visualizing the clusters
# from sklearn.decomposition import PCA
# pca_2 = PCA(2)
# plot_columns = pca_2.fit_transform(base_cluster.iloc[:,0:65])
# plt.scatter(x=plot_columns[:,0], y=plot_columns[:,1], c=fitClusters,)
# plt.show() | _____no_output_____ | MIT | Script/kmodes-cluster-perfil-consumo-cliente.ipynb | helenacypreste/cluster-cliente-perfil-consumo |
Iterators and Generators Homework Problem 1: Create a generator that generates the squares of numbers up to some number N. | # option 1
def gensquares(N):
end = N
start = 0
while start<end:
yield start**2
start += 1
# option 2
def gensquares2(N):
for num in range(N):
yield num**2
for x in gensquares2(10):
print(x) | 0
1
4
9
16
25
36
49
64
81
| DOC | Code/9.generadores/02-Iterators and Generators Homework.ipynb | davidMartinVergues/PYTHON |
Problem 2: Create a generator that yields "n" random numbers between a low and high number (that are inputs). Note: Use the random library. For example: | import random
random.randint(1,10)
def rand_num(low,high,n):
for times in range(n):
yield random.randint(low,high)
for num in rand_num(1,10,12):
print(num) | 1
5
4
6
9
5
5
4
2
7
9
6
| DOC | Code/9.generadores/02-Iterators and Generators Homework.ipynb | davidMartinVergues/PYTHON |
Problem 3: Use the iter() function to convert the string below into an iterator: | s = 'hello'
#code here
s_iter = iter(s) | _____no_output_____ | DOC | Code/9.generadores/02-Iterators and Generators Homework.ipynb | davidMartinVergues/PYTHON |
Problem 4: Explain a use case for a generator using a yield statement where you would not want to use a normal function with a return statement. Extra Credit! Can you explain what *gencomp* is in the code below? (Note: We never covered this in lecture! You will have to do some Googling/Stack Overflowing!) | my_list = [1,2,3,4,5]
gencomp = (item for item in my_list if item > 3)
for item in gencomp:
print(item) | 4
5
| DOC | Code/9.generadores/02-Iterators and Generators Homework.ipynb | davidMartinVergues/PYTHON |
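A hint for the extra credit: `gencomp` is a generator expression -- the lazy counterpart of a list comprehension, and the same reason you'd pick `yield` over `return` in Problem 4 when the full result needn't live in memory. A sketch of the difference:

```python
import sys

my_list = list(range(1000))
as_list = [n * 2 for n in my_list]   # built eagerly: all items in memory
as_gen = (n * 2 for n in my_list)    # built lazily: only the recipe is stored
# The generator's size doesn't grow with the data
print(sys.getsizeof(as_list) > sys.getsizeof(as_gen))  # True
print(next(as_gen))  # 0 -- values are produced one at a time, on demand
```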
Copyright 2019 The TensorFlow Authors. | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License. | _____no_output_____ | MIT | Improving_Computer_Vision_Using_CNN.ipynb | snalahi/Introduction-to-TensorFlow-for-Artificial-Intelligence-Machine-Learning-and-Deep-Learning |
Improving Computer Vision Accuracy using ConvolutionsIn the previous lessons you saw how to do fashion recognition using a Deep Neural Network (DNN) containing three layers -- the input layer (in the shape of the data), the output layer (in the shape of the desired output) and a hidden layer. You experimented with the impact of different sizes of hidden layer, number of training epochs etc on the final accuracy.For convenience, here's the entire code again. Run it and take a note of the test accuracy that is printed out at the end. | import tensorflow as tf
mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images=training_images / 255.0
test_images=test_images / 255.0
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(training_images, training_labels, epochs=5)
test_loss = model.evaluate(test_images, test_labels) | Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-labels-idx1-ubyte.gz
32768/29515 [=================================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-images-idx3-ubyte.gz
26427392/26421880 [==============================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-labels-idx1-ubyte.gz
8192/5148 [===============================================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-images-idx3-ubyte.gz
4423680/4422102 [==============================] - 0s 0us/step
Epoch 1/5
1875/1875 [==============================] - 4s 2ms/step - loss: 0.4968 - accuracy: 0.8242
Epoch 2/5
1875/1875 [==============================] - 4s 2ms/step - loss: 0.3721 - accuracy: 0.8649
Epoch 3/5
1875/1875 [==============================] - 4s 2ms/step - loss: 0.3326 - accuracy: 0.8787
Epoch 4/5
1875/1875 [==============================] - 4s 2ms/step - loss: 0.3107 - accuracy: 0.8857
Epoch 5/5
1875/1875 [==============================] - 4s 2ms/step - loss: 0.2946 - accuracy: 0.8911
313/313 [==============================] - 1s 1ms/step - loss: 0.3476 - accuracy: 0.8750
| MIT | Improving_Computer_Vision_Using_CNN.ipynb | snalahi/Introduction-to-TensorFlow-for-Artificial-Intelligence-Machine-Learning-and-Deep-Learning |
Your accuracy is probably about 89% on training and 87% on validation...not bad...But how do you make that even better? One way is to use something called Convolutions. I'm not going into detail on Convolutions here, but the ultimate concept is that they narrow down the content of the image to focus on specific, distinct details. If you've ever done image processing using a filter (like this: https://en.wikipedia.org/wiki/Kernel_(image_processing)) then convolutions will look very familiar. In short, you take an array (usually 3x3 or 5x5) and pass it over the image. By changing the underlying pixels based on the formula within that matrix, you can do things like edge detection. So, for example, if you look at the above link, you'll see a 3x3 that is defined for edge detection where the middle cell is 8, and all of its neighbors are -1. In this case, for each pixel, you would multiply its value by 8, then subtract the value of each neighbor. Do this for every pixel, and you'll end up with a new image that has the edges enhanced. This is perfect for computer vision, because often it's features that can get highlighted like this that distinguish one item from another, and the amount of information needed is then much less...because you'll just train on the highlighted features. That's the concept of Convolutional Neural Networks. Add some layers to do convolution before you have the dense layers, and then the information going to the dense layers is more focused, and possibly more accurate. Run the below code -- this is the same neural network as earlier, but this time with Convolutional layers added first. It will take longer, but look at the impact on the accuracy: | import tensorflow as tf
print(tf.__version__)
mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
# Reshaping the images to tell the convolutional layers that the images are in greyscale by adding an extra dimension of 1
training_images=training_images.reshape(60000, 28, 28, 1)
training_images=training_images / 255.0
test_images = test_images.reshape(10000, 28, 28, 1)
test_images=test_images/255.0
model = tf.keras.models.Sequential([
# Note: convolutions and max pooling are always applied before the dense (DNN) layers
# Why 2D? Because the applied convolutions and poolings operate on 2D arrays (having rows and columns)
# Here 64 is the total number of convolutional filters of size (3, 3) applied
### Be careful about the shapes!!! You need to pass input_shape to the first Conv2D(),
### otherwise you'll get an error!!!
tf.keras.layers.Conv2D(64, (3, 3), activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# Note: model.summary() is a way to cross-check that the DNN with convolutions is wired
# correctly, with the expected shapes at every layer
model.summary()
model.fit(training_images, training_labels, epochs=5)
test_loss = model.evaluate(test_images, test_labels)
| 2.5.0
Model: "sequential_3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_4 (Conv2D) (None, 26, 26, 64) 640
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 13, 13, 64) 0
_________________________________________________________________
conv2d_5 (Conv2D) (None, 11, 11, 64) 36928
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 5, 5, 64) 0
_________________________________________________________________
flatten_3 (Flatten) (None, 1600) 0
_________________________________________________________________
dense_6 (Dense) (None, 128) 204928
_________________________________________________________________
dense_7 (Dense) (None, 10) 1290
=================================================================
Total params: 243,786
Trainable params: 243,786
Non-trainable params: 0
_________________________________________________________________
Epoch 1/5
1875/1875 [==============================] - 81s 43ms/step - loss: 0.4397 - accuracy: 0.8383
Epoch 2/5
1875/1875 [==============================] - 79s 42ms/step - loss: 0.2954 - accuracy: 0.8918
Epoch 3/5
1875/1875 [==============================] - 79s 42ms/step - loss: 0.2518 - accuracy: 0.9076
Epoch 4/5
1875/1875 [==============================] - 78s 42ms/step - loss: 0.2214 - accuracy: 0.9164
Epoch 5/5
1875/1875 [==============================] - 78s 42ms/step - loss: 0.1924 - accuracy: 0.9273
313/313 [==============================] - 4s 12ms/step - loss: 0.2676 - accuracy: 0.9057
| MIT | Improving_Computer_Vision_Using_CNN.ipynb | snalahi/Introduction-to-TensorFlow-for-Artificial-Intelligence-Machine-Learning-and-Deep-Learning |
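To make the edge-detection kernel from the explanation above concrete (center 8, neighbors -1, multiply and sum), here's a small NumPy sketch of applying it at a single pixel:

```python
import numpy as np

# 3x3 edge-detection kernel: middle cell 8, all neighbors -1
kernel = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]])

def convolve_pixel(image, row, col):
    # Multiply the 3x3 neighborhood elementwise by the kernel and sum
    patch = image[row - 1:row + 2, col - 1:col + 2]
    return int(np.sum(patch * kernel))

flat = np.full((3, 3), 5)           # a flat region: no edge
edge = np.zeros((3, 3), dtype=int)
edge[1, 1] = 9                      # a bright pixel on a dark background
print(convolve_pixel(flat, 1, 1))   # 0
print(convolve_pixel(edge, 1, 1))   # 72
```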
It's likely gone up to about 93% on the training data and 91% on the validation data. That's significant, and a step in the right direction! Try running it for more epochs -- say about 20, and explore the results! But while the results might seem really good, the validation results may actually go down, due to something called 'overfitting', which will be discussed later. (In a nutshell, 'overfitting' occurs when the network learns the data from the training set really well, but it's too specialised to only that data, and as a result is less effective at seeing *other* data. For example, if all your life you only saw red shoes, then when you see a red shoe you would be very good at identifying it, but blue suede shoes might confuse you...and you know you should never mess with my blue suede shoes.) Then, look at the code again, and see, step by step, how the Convolutions were built: Step 1 is to gather the data. You'll notice that there's a bit of a change here in that the training data needed to be reshaped. That's because the first convolution expects a single tensor containing everything, so instead of 60,000 28x28x1 items in a list, we have a single 4D list that is 60,000x28x28x1, and the same for the test images. If you don't do this, you'll get an error when training as the Convolutions do not recognize the shape. ```import tensorflow as tf
mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images=training_images.reshape(60000, 28, 28, 1)
training_images=training_images / 255.0
test_images = test_images.reshape(10000, 28, 28, 1)
test_images=test_images/255.0``` Next is to define your model. Now instead of the input layer at the top, you're going to add a Convolution. The parameters are: 1. The number of convolutions you want to generate. Purely arbitrary, but good to start with something in the order of 32. 2. The size of the Convolution, in this case a 3x3 grid. 3.
The activation function to use -- in this case we'll use relu, which you might recall is the equivalent of returning x when x>0, else returning 0. 4. In the first layer, the shape of the input data. You'll follow the Convolution with a MaxPooling layer, which is designed to compress the image while maintaining the content of the features that were highlighted by the convolution. By specifying (2,2) for the MaxPooling, the effect is to quarter the size of the image. Without going into too much detail here, the idea is that it creates a 2x2 array of pixels, and picks the biggest one, thus turning 4 pixels into 1. It repeats this across the image, and in so doing halves the number of horizontal pixels and halves the number of vertical pixels, effectively reducing the image to 25% of its original size. You can call model.summary() to see the size and shape of the network, and you'll notice that after every MaxPooling layer, the image size is reduced in this way. ```model = tf.keras.models.Sequential([ tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(28, 28, 1)), tf.keras.layers.MaxPooling2D(2, 2),``` Add another convolution``` tf.keras.layers.Conv2D(64, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2)``` Now flatten the output. After this you'll just have the same DNN structure as the non-convolutional version``` tf.keras.layers.Flatten(),``` The same 128 dense layers, and 10 output layers as in the pre-convolution example:``` tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dense(10, activation='softmax')])``` Now compile the model, call the fit method to do the training, and evaluate the loss and accuracy from the test set.```model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(training_images, training_labels, epochs=5)
test_loss, test_acc = model.evaluate(test_images, test_labels)
print(test_acc)``` Visualizing the Convolutions and Pooling This code will show us the convolutions graphically.
The print(test_labels[:100]) shows us the first 100 labels in the test set, and you can see that the ones at index 0, index 23 and index 28 are all the same value (9). They're all shoes. Let's take a look at the result of running the convolution on each, and you'll begin to see common features between them emerge. Now, when the DNN is training on that data, it's working with a lot less, and it's perhaps finding a commonality between shoes based on this convolution/pooling combination. | print(test_labels[:100])
import matplotlib.pyplot as plt
f, axarr = plt.subplots(3,4)
FIRST_IMAGE=0
SECOND_IMAGE=7
THIRD_IMAGE=26
CONVOLUTION_NUMBER = 1
from tensorflow.keras import models
layer_outputs = [layer.output for layer in model.layers]
activation_model = tf.keras.models.Model(inputs = model.input, outputs = layer_outputs)
for x in range(0,4):
f1 = activation_model.predict(test_images[FIRST_IMAGE].reshape(1, 28, 28, 1))[x]
axarr[0,x].imshow(f1[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')
axarr[0,x].grid(False)
f2 = activation_model.predict(test_images[SECOND_IMAGE].reshape(1, 28, 28, 1))[x]
axarr[1,x].imshow(f2[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')
axarr[1,x].grid(False)
f3 = activation_model.predict(test_images[THIRD_IMAGE].reshape(1, 28, 28, 1))[x]
axarr[2,x].imshow(f3[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')
axarr[2,x].grid(False) | _____no_output_____ | MIT | Improving_Computer_Vision_Using_CNN.ipynb | snalahi/Introduction-to-TensorFlow-for-Artificial-Intelligence-Machine-Learning-and-Deep-Learning |
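MaxPooling's "pick the biggest of each 2x2 square" step can also be sketched in a few lines of NumPy (a sketch of the idea, not Keras's implementation; assumes an even-sized input):

```python
import numpy as np

def max_pool_2x2(image):
    # Group pixels into 2x2 blocks and keep each block's maximum,
    # halving both the height and the width
    h, w = image.shape
    return image.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

img = np.array([[1, 3, 2, 1],
                [4, 2, 0, 1],
                [1, 1, 5, 6],
                [0, 2, 7, 8]])
print(max_pool_2x2(img))
# [[4 2]
#  [2 8]]
```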
EXERCISES 1. Try editing the convolutions. Change the 32s to either 16 or 64. What impact will this have on accuracy and/or training time? 2. Remove the final Convolution. What impact will this have on accuracy or training time? 3. How about adding more Convolutions? What impact do you think this will have? Experiment with it. 4. Remove all Convolutions but the first. What impact do you think this will have? Experiment with it. 5. In the previous lesson you implemented a callback to check on the loss function and to cancel training once it hit a certain amount. See if you can implement that here! | import tensorflow as tf
print(tf.__version__)
mnist = tf.keras.datasets.mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images=training_images.reshape(60000, 28, 28, 1)
training_images=training_images / 255.0
test_images = test_images.reshape(10000, 28, 28, 1)
test_images=test_images/255.0
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(training_images, training_labels, epochs=10)
test_loss, test_acc = model.evaluate(test_images, test_labels)
print(test_acc) | 2.5.0
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
Epoch 1/10
1875/1875 [==============================] - 34s 18ms/step - loss: 0.1424 - accuracy: 0.9569
Epoch 2/10
1875/1875 [==============================] - 34s 18ms/step - loss: 0.0502 - accuracy: 0.9842
Epoch 3/10
1875/1875 [==============================] - 34s 18ms/step - loss: 0.0311 - accuracy: 0.9901
Epoch 4/10
1875/1875 [==============================] - 34s 18ms/step - loss: 0.0217 - accuracy: 0.9932
Epoch 5/10
1875/1875 [==============================] - 34s 18ms/step - loss: 0.0138 - accuracy: 0.9955
Epoch 6/10
1875/1875 [==============================] - 34s 18ms/step - loss: 0.0100 - accuracy: 0.9969
Epoch 7/10
1875/1875 [==============================] - 34s 18ms/step - loss: 0.0078 - accuracy: 0.9974
Epoch 8/10
1875/1875 [==============================] - 34s 18ms/step - loss: 0.0063 - accuracy: 0.9980
Epoch 9/10
1875/1875 [==============================] - 34s 18ms/step - loss: 0.0046 - accuracy: 0.9986
Epoch 10/10
1875/1875 [==============================] - 34s 18ms/step - loss: 0.0055 - accuracy: 0.9982
313/313 [==============================] - 2s 6ms/step - loss: 0.0584 - accuracy: 0.9866
0.9865999817848206
| MIT | Improving_Computer_Vision_Using_CNN.ipynb | snalahi/Introduction-to-TensorFlow-for-Artificial-Intelligence-Machine-Learning-and-Deep-Learning |
Fundamentals, introduction to machine learningThe purpose of these guides is to go a bit deeper into the details behind common machine learning methods, assuming little math background, and teach you how to use popular machine learning Python packages. In particular, we'll focus on the Numpy and PyTorch libraries.I'll assume you have some experience programming with Python -- if not, check out the initial [fundamentals of Python guide](https://github.com/ml4a/ml4a-guides/blob/master/notebooks/intro_python.ipynb) or for a longer, more comprehensive resource: [Learn Python the Hard Way](http://learnpythonthehardway.org/book/). It will really help to illustrate the concepts introduced here.Numpy underlies most Python machine learning packages and is great for performing quick sketches or working through calculations. PyTorch rivals alternative libraries, such as TensorFlow, for its flexibility and ease of use. Despite the high level appearance of PyTorch, it can be quite low-level, which is great for experimenting with novel algorithms. PyTorch can seamlessly be integrated with distributed computation libraries, like Ray, to make the Kessel Run in less than 12 parsecs (citation needed). These guides will present the formal math for concepts alongside Python code examples since this often (for me at least) is a lot easier to develop an intuition for. Each guide is also available as an iPython notebook for your own experimentation.The guides are not meant to exhaustively cover the field of machine learning but I hope they will instill you with the confidence and knowledge to explore further on your own.If you do want more details, you might enjoy my [artificial intelligence notes](http://frnsys.com/ai_notes). Modeling the worldYou've probably seen various machine learning algorithms pop up -- linear regression, SVMs, neural networks, random forests, etc. How are they all related? What do they have in common? 
What is machine learning for anyways?First, let's consider the general, fundamental problem all machine learning is concerned with, leaving aside the algorithm name soup for now. The primary concern of machine learning is _modeling the world_.We can model phenomena or systems -- both natural and artificial, if you want to make that distinction -- with mathematical functions. We see something out in the world and want to describe it in some way, we want to formalize how two or more things are related, and we can do that with a function. The problem is, for a given phenomenon, how do we figure out what function to use? There are infinitely many to choose from!Before this gets too abstract, let's use an example to make things more concrete.Say we have a bunch of data about the heights and weights of a species of deer. We want to understand how these two variables are related -- in particular, given the weight of a deer, can we predict its height?You might see where this is going. The data looks like a line, and lines in general are described by functions of the form $y = mx + b$.Remember that lines vary depending on what the values of $m$ and $b$ are:Thus $m$ and $b$ uniquely define a function -- thus they are called the _parameters_ of the function -- and when it comes to machine learning, these parameters are what we ultimately want to learn. So when I say there are infinitely many functions to choose from, it is because $m$ and $b$ can pretty much take on any value. Machine learning techniques essentially search through these possible functions to find parameters that best fit the data you have. One way machine learning algorithms are differentiated is by how exactly they conduct this search (i.e. how they learn parameters).In this case we've (reasonably) assumed the function takes the form $y = mx + b$, but conceivably you may have data that doesn't take the form of a line. Real world data is typically a lot more convoluted-looking. 
Maybe the true function has a $\sin$ in it, for example.This is where another main distinction between machine learning algorithms comes in -- certain algorithms can model only certain forms of functions. _Linear regression_, for example, can only model linear functions, as indicated by its name. Neural networks, on the other hand, are _universal function approximators_, which means they can (in theory) approximate _any_ function, no matter how exotic. This doesn't necessarily make them a better method, just better suited for certain circumstances (there are many other considerations when choosing an algorithm).For now, let's return to the line function. Now that we've looked at the $m$ and $b$ variables, let's consider the input variable $x$. A function takes a numerical input; that is, $x$ must be a number of some kind. That's pretty straightforward here since the deer weights are already numbers. But this is not always the case! What if we want to predict the sales price of a house? A house is not a number. We have to find a way to _represent_ it as a number (or as several numbers, i.e. a vector, which will be detailed in a moment), e.g. by its square footage. This challenge of representation is a major part of machine learning; the practice of building representations is known as _feature engineering_ since each variable (e.g. square footage or zip code) used for the representation is called a _feature_.If you think about it, representation is a practice we regularly engage in. The word "house" is not a house any more than an image of a house is -- there is no true "house" anyways, it is always a constellation of various physical and nonphysical components.That's about it -- broadly speaking, machine learning is basically a bunch of algorithms that learn you a function, which is to say they learn the parameters that uniquely define a function. VectorsIn the line example before I mentioned that we might have multiple numbers representing an input. 
For example, a house probably can't be solely represented by its square footage -- perhaps we also want to consider how many bedrooms it has, or how high the ceilings are, or its distance from local transportation. How do we group these numbers together?That's what _vectors_ are for (they come up for many other reasons too, but we'll focus on representation for now). Vectors, along with matrices and other tensors (which will be explained a bit further down), could be considered the "primitives" of machine learning.The Numpy library is best for dealing with vectors (and other tensors) in Python. A more complete introduction to Numpy is provided in the [numpy and basic mathematics guide](https://github.com/ml4a/ml4a-guides/blob/master/notebooks/math_review_numpy.ipynb).Let's import `numpy` with the alias `np`: | import numpy as np | _____no_output_____ | MIT | examples/fundamentals/fundamentals.ipynb | yo252yo/ml4a |
You may have encountered vectors before in high school or college -- to use Python terms, a vector is like a list of numbers. The mathematical notation is quite similar to Python code, e.g. `[5,4]`, but `numpy` has its own way of instantiating a vector: | v = np.array([5, 4]) | _____no_output_____ | MIT | examples/fundamentals/fundamentals.ipynb | yo252yo/ml4a |
$$v = \begin{bmatrix} 5 \\ 4 \end{bmatrix}$$Vectors are usually represented with lowercase variables.Note that we never specified how _many_ numbers (also called _components_) a vector has - because it can have any amount. The amount of components a vector has is called its _dimensionality_. The example vector above has two dimensions. The vector `x = [8,1,3]` has three dimensions, and so on. Components are usually indicated by their index (usually using 1-indexing), e.g. in the previous vector, $x_1$ refers to the value $8$."Dimensions" in the context of vectors is just like the spatial dimensions you spend every day in. These dimensions define a __space__, so a two-dimensional vector, e.g. `[5,4]`, can describe a point in 2D space and a three-dimensional vector, e.g. `[8,1,3]`, can describe a point in 3D space. As mentioned before, there is no limit to the amount of dimensions a vector may have (technically, there must be one or more dimensions), so we could conceivably have space consisting of thousands or tens of thousands of dimensions. At that point we can't rely on the same human intuitions about space as we could when working with just two or three dimensions. In practice, most interesting applications of machine learning deal with many, many dimensions.We can get a better sense of this by plotting a vector out. For instance, a 2D vector `[5,0]` would look like:So in a sense vectors can be thought of lines that "point" to the position they specify - here the vector is a line "pointing" to `[5,0]`. If the vector were 3D, e.g. `[8,1,3]`, then we would have to visualize it in 3D space, and so on.So vectors are great - they allow us to form logical groupings of numbers. For instance, if we're talking about cities on a map we would want to group their latitude and longitude together. We'd represent Lagos with `[6.455027, 3.384082]` and Beijing separately with `[39.9042, 116.4074]`. 
If we have an inventory of books for sale, we could represent each book with its own vector consisting of its price, number of pages, and remaining stock.To use vectors in functions, there are a few mathematical operations you need to know. Basic vector operationsVectors can be added (and subtracted) easily: | np.array([6, 2]) + np.array([-4, 4]) | _____no_output_____ | MIT | examples/fundamentals/fundamentals.ipynb | yo252yo/ml4a |
$$\begin{bmatrix} 6 \\ 2 \end{bmatrix} + \begin{bmatrix} -4 \\ 4 \end{bmatrix} = \begin{bmatrix} 6 + -4 \\ 2 + 4 \end{bmatrix} = \begin{bmatrix} 2 \\ 6 \end{bmatrix}$$However, when it comes to vector multiplication there are many different kinds.The simplest is _vector-scalar_ multiplication: | 3 * np.array([2, 1]) | _____no_output_____ | MIT | examples/fundamentals/fundamentals.ipynb | yo252yo/ml4a |
$$3\begin{bmatrix} 2 \\ 1 \end{bmatrix} = \begin{bmatrix} 3 \times 2 \\ 3 \times 1\end{bmatrix} = \begin{bmatrix} 6 \\ 3 \end{bmatrix}$$But when you multiply two vectors together you have a few options. I'll cover the two most important ones here.The one you might have thought of is the _element-wise product_, also called the _pointwise product_, _component-wise product_, or the _Hadamard product_, typically notated with $\odot$. This just involves multiplying the corresponding elements of each vector together, resulting in another vector: | np.array([6, 2]) * np.array([-4, 4]) | _____no_output_____ | MIT | examples/fundamentals/fundamentals.ipynb | yo252yo/ml4a |
$$\begin{bmatrix} 6 \\ 2 \end{bmatrix} \odot \begin{bmatrix} -4 \\ 4 \end{bmatrix} = \begin{bmatrix} 6 \times -4 \\ 2 \times 4 \end{bmatrix} = \begin{bmatrix} -24 \\ 8 \end{bmatrix}$$The other vector product, which you'll encounter a lot, is the _dot product_, also called _inner product_, usually notated with $\cdot$ (though when vectors are placed side-by-side this often implies dot multiplication). This involves multiplying corresponding elements of each vector and then summing the resulting vector's components (so this results in a scalar rather than another vector). | np.dot(np.array([6, 2]), np.array([-4, 4])) | _____no_output_____ | MIT | examples/fundamentals/fundamentals.ipynb | yo252yo/ml4a |
$$\begin{bmatrix} 6 \\ 2 \end{bmatrix} \cdot \begin{bmatrix} -4 \\ 4 \end{bmatrix} = (6 \times -4) + (2 \times 4) = -16$$The more general formulation is: | # a slow pure-Python dot product
def dot(a, b):
assert len(a) == len(b)
return sum(a_i * b_i for a_i, b_i in zip(a,b)) | _____no_output_____ | MIT | examples/fundamentals/fundamentals.ipynb | yo252yo/ml4a |
$$\begin{aligned}\vec{a} \cdot \vec{b} &= \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix} \cdot \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix} = a_1b_1 + a_2b_2 + \dots + a_nb_n \\&= \sum^n_{i=1} a_i b_i\end{aligned}$$Note that the vectors in these operations must have the same dimensions!Perhaps the most important vector operation mentioned here is the dot product. We'll return to the house example to see why. Let's say want to represent a house with three variables: square footage, number of bedrooms, and the number of bathrooms. For convenience we'll notate the variables $x_1, x_2, x_3$, respectively. We're working in three dimensions now so instead of learning a line we're learning a _hyperplane_ (if we were working with two dimensions we'd be learning a plane, "hyperplane" is the term for the equivalent of a plane in higher dimensions).Aside from the different name, the function we're learning is essentially of the same form as before, just with more variables and thus more parameters. We'll notate each parameter as $\theta_i$ as is the convention (you may see $\beta_i$ used elsewhere), and for the intercept (what was the $b$ term in the original line), we'll add in a dummy variable $x_0 = 1$ as is the typical practice (thus $\theta_0$ is equivalent to $b$): | # this is so clumsy in python;
# this will become more concise in a bit
def f(x0, x1, x2, x3, theta0, theta1, theta2, theta3):
return theta0 * x0\
+ theta1 * x1\
+ theta2 * x2\
+ theta3 * x3 | _____no_output_____ | MIT | examples/fundamentals/fundamentals.ipynb | yo252yo/ml4a |
$$y = \theta_0 x_0 + \theta_1 x_1 + \theta_2 x_2 + \theta_3 x_3$$This kind of looks like the dot product, doesn't it? In fact, we can re-write this entire function as a dot product. We define our feature vector $x = [x_0, x_1, x_2, x_3]$ and our parameter vector $\theta = [\theta_0, \theta_1, \theta_2, \theta_3]$, then re-write the function: | def f(x, theta):
return x.dot(theta) | _____no_output_____ | MIT | examples/fundamentals/fundamentals.ipynb | yo252yo/ml4a |
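To make the dot-product hypothesis concrete, here it is evaluated on a single made-up house -- the feature values and parameters below are invented for illustration, not learned from any data:

```python
import numpy as np

def f(x, theta):
    return x.dot(theta)

# x0 = 1 is the dummy intercept feature, followed by square
# footage, bedrooms, and bathrooms (all made-up numbers)
x = np.array([1.0, 1200.0, 3.0, 2.0])

# theta0 plays the role of b; the rest weight each feature
theta = np.array([50000.0, 100.0, 10000.0, 5000.0])

price = f(x, theta)
print(price)  # 50000 + 120000 + 30000 + 10000 = 210000.0
```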
$$y = \theta x$$So that's how we incorporate multiple features in a representation.There's a whole lot more to vectors than what's presented here, but this is the ground-level knowledge you should have of them. Other aspects of vectors will be explained as they come up. LearningSo machine learning algorithms learn parameters - how do they do it?Here we're focusing on the most common kind of machine learning - _supervised_ learning. In supervised learning, the algorithm learns parameters from data which includes both the inputs and the true outputs. This data is called _training_ data.Although they vary on specifics, there is a general approach that supervised machine learning algorithms use to learn parameters. The idea is that the algorithm takes an input example, inputs it into the current guess at the function (called the _hypothesis_, notated $h_{\theta}$), and then checks how wrong its output is against the true output. The algorithm then updates its hypothesis (that is, its guesses for the parameters) accordingly."How wrong" an algorithm is can vary depending on the _loss function_ it is using. The loss function takes the algorithm's current guess for the output, $\hat y$, and the true output, $y$, and returns some value quantifying its wrongness. Certain loss functions are more appropriate for certain tasks, which we'll get into later.We'll get into the specifics of how the algorithm determines what kind of update to perform (i.e. how much each parameter changes), but before we do that we should consider how we manage batches of training examples (i.e. multiple training vectors) simultaneously. Matrices__Matrices__ are in a sense a "vector" of vectors. That is, where a vector can be thought of as a logical grouping of numbers, a matrix can be thought of as a logical grouping of vectors. So if a vector represents a book in our catalog (id, price, number in stock), a matrix could represent the entire catalog (each row refers to a book). 
Or if we want to represent a grayscale image, the matrix can represent the brightness values of the pixels in the image. | A = np.array([
[6, 8, 0],
[8, 2, 7],
[3, 3, 9],
[3, 8, 6]
]) | _____no_output_____ | MIT | examples/fundamentals/fundamentals.ipynb | yo252yo/ml4a |
$$\mathbf A =\begin{bmatrix}6 & 8 & 0 \\8 & 2 & 7 \\3 & 3 & 9 \\3 & 8 & 6\end{bmatrix}$$Matrices are usually represented with uppercase variables.Note that the "vectors" in the matrix must have the same dimension. The matrix's dimensions are expressed in the form $m \times n$, meaning that there are $m$ rows and $n$ columns. So the example matrix has dimensions of $4 \times 3$. Numpy calls these dimensions a matrix's "shape".We can access a particular element, $A_{i,j}$, in a matrix by its indices. Say we want to refer to the element in the 2nd row and the 3rd column (remember that python uses 0-indexing): | A[1,2] | _____no_output_____ | MIT | examples/fundamentals/fundamentals.ipynb | yo252yo/ml4a |
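The shape numpy reports should match the $4 \times 3$ dimensions stated above, with indexing confirming the element we just looked up:

```python
import numpy as np

A = np.array([
    [6, 8, 0],
    [8, 2, 7],
    [3, 3, 9],
    [3, 8, 6]
])

# shape is (rows, columns), i.e. m x n
print(A.shape)   # (4, 3)

# element in the 2nd row and 3rd column, with 0-indexing
print(A[1, 2])   # 7
```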
Basic matrix operationsLike vectors, matrix addition and subtraction is straightforward (again, they must be of the same dimensions): | B = np.array([
[8, 3, 7],
[2, 9, 6],
[2, 5, 6],
[5, 0, 6]
])
A + B | _____no_output_____ | MIT | examples/fundamentals/fundamentals.ipynb | yo252yo/ml4a |
$$\begin{aligned}\mathbf B &=\begin{bmatrix}8 & 3 & 7 \\2 & 9 & 6 \\2 & 5 & 6 \\5 & 0 & 6\end{bmatrix} \\A + B &=\begin{bmatrix}8+6 & 3+8 & 7+0 \\2+8 & 9+2 & 6+7 \\2+3 & 5+3 & 6+9 \\5+3 & 0+8 & 6+6\end{bmatrix} \\&=\begin{bmatrix}14 & 11 & 7 \\10 & 11 & 13 \\5 & 8 & 15 \\8 & 8 & 12\end{bmatrix} \\\end{aligned}$$Matrices also have a few different multiplication operations, like vectors._Matrix-scalar multiplication_ is similar to vector-scalar multiplication - you just distribute the scalar, multiplying it with each element in the matrix._Matrix-vector products_ require that the vector has the same dimension as the matrix has columns, i.e. for an $m \times n$ matrix, the vector must be $n$-dimensional. The operation basically involves taking the dot product of each matrix row with the vector: | # a slow pure-Python matrix-vector product,
# using our previous dot product implementation
def matrix_vector_product(M, v):
return [np.dot(row, v) for row in M]
# or, with numpy, you could use np.matmul(A,v) | _____no_output_____ | MIT | examples/fundamentals/fundamentals.ipynb | yo252yo/ml4a |
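A quick check that the loop-based version agrees with numpy's built-in routine on a small example:

```python
import numpy as np

def matrix_vector_product(M, v):
    # dot each row of M with v
    return [np.dot(row, v) for row in M]

M = np.array([
    [1, 2, 3],
    [4, 5, 6]
])
v = np.array([1, 0, -1])

ours = np.array(matrix_vector_product(M, v))
theirs = np.matmul(M, v)

print(ours)    # [-2 -2]
print(theirs)  # [-2 -2]
```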
$$\mathbf M v =\begin{bmatrix}M_{1} \cdot v \\\vdots \\M_{m} \cdot v \\\end{bmatrix}$$We have a few options when it comes to multiplying matrices with matrices.However, before we go any further we should talk about the _tranpose_ operation - this just involves switching the columns and rows of a matrix. The transpose of a matrix $A$ is notated $A^T$: | A = np.array([
[1,2,3],
[4,5,6]
])
np.transpose(A) | _____no_output_____ | MIT | examples/fundamentals/fundamentals.ipynb | yo252yo/ml4a |
$$\begin{aligned}\mathbf A &=\begin{bmatrix}1 & 2 & 3 \\4 & 5 & 6\end{bmatrix} \\\mathbf A^T &=\begin{bmatrix}1 & 4 \\2 & 5 \\3 & 6\end{bmatrix}\end{aligned}$$For matrix-matrix products, the matrix on the lefthand must have the same number of columns as the righthand's rows. To be more concrete, we'll represent a matrix-matrix product as $A B$ and we'll say that $A$ has $m \times n$ dimensions. For this operation to work, $B$ must have $n \times p$ dimensions. The resulting product will have $m \times p$ dimensions. | # a slow pure-Python matrix-matrix product
def matrix_matrix_product(A, B):
_, a_cols = np.shape(A)
b_rows, _ = np.shape(B)
assert a_cols == b_rows
result = []
# tranpose B so we can iterate over its columns
for col in np.transpose(B):
# using our previous implementation
result.append(
matrix_vector_product(A, col))
return np.transpose(result) | _____no_output_____ | MIT | examples/fundamentals/fundamentals.ipynb | yo252yo/ml4a |
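The dimension rule -- an $m \times n$ matrix times an $n \times p$ matrix yields an $m \times p$ result -- is easy to confirm with numpy's built-in product:

```python
import numpy as np

A = np.array([        # 2 x 3
    [1, 2, 3],
    [4, 5, 6]
])
B = np.array([        # 3 x 2
    [1, 0],
    [0, 1],
    [1, 1]
])

# (2 x 3) times (3 x 2) gives a 2 x 2 result
C = np.matmul(A, B)
print(C.shape)  # (2, 2)
print(C)
```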
$$\mathbf AB =\begin{bmatrix}A B^T_1 \\\vdots \\A B^T_p\end{bmatrix}^T$$Finally, like with vectors, we also have Hadamard (element-wise) products: | # a slow pure-Python matrix Hadamard product
# or, with numpy, you can use A * B
def matrix_matrix_hadamard(A, B):
result = []
for a_row, b_row in zip(A, B):
result.append(
[a_i * b_i for a_i, b_i in zip(a_row, b_row)])
return result | _____no_output_____ | MIT | examples/fundamentals/fundamentals.ipynb | yo252yo/ml4a |
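In numpy no loop is needed for this: the `*` operator on two same-shaped arrays already multiplies corresponding entries:

```python
import numpy as np

A = np.array([
    [1, 2],
    [3, 4]
])
B = np.array([
    [10, 20],
    [30, 40]
])

# element-wise (Hadamard) product: each entry of A times the
# corresponding entry of B
H = A * B
print(H)
```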
$$\mathbf A \odot B =\begin{bmatrix}A_{1,1} B_{1,1} & \dots & A_{1,n} B_{1,n} \\\vdots & \dots & \vdots \\A_{m,1} B_{m,1} & \dots & A_{m,n} B_{m,n}\end{bmatrix}$$Like vector Hadamard products, this requires that the two matrices share the same dimensions. TensorsWe've seen vectors, which are like lists of numbers, and matrices, which are like lists of lists of numbers. We can generalize this concept even further, for instance, with a list of a list of a list of numbers and so on. All of these structures are called _tensors_ (i.e. the "tensor" in "TensorFlow"). They are distinguished by their _rank_, which, if you're thinking in the "list of lists" way, refers to the number of nestings. So a vector has a rank of one (just a list of numbers) and a matrix has a rank of two (a list of a list of numbers).Another way to think of rank is by number of indices necessary to access an element in the tensor. An element in a vector is accessed by one index, e.g. `v[i]`, so it is of rank one. An element in a matrix is accessed by two indices, e.g. `M[i,j]`, so it is of rank two.Why is the concept of a tensor useful? Before we referred to vectors as a logical grouping of numbers and matrices as a logical grouping of vectors. What if we need a logical grouping of matrices? That's what 3rd-rank tensors are! A matrix can represent a grayscale image, but what about a color image with three color channels (red, green, blue)? With a 3rd-rank tensor, we could represent each channel as its own matrix and group them together. Learning continuedWhen the current hypothesis is wrong, how does the algorithm know how to adjust the parameters?Let's take a step back and look at it another way. The loss function measures the wrongness of the hypothesis $h_{\theta}$ - another way of saying this is the loss function is a function of the parameters $\theta$. 
So we could notate it as $L(\theta)$.The minimum of $L(\theta)$ is the point where the parameters guess $\theta$ is least wrong (at best, $L(\theta) = 0$, i.e. a perfect score, though this is not always good, as will be explained later); i.e. the best guess for the parameters.So the algorithm learns the best-fitting function by minimizing its loss function. That is, we can frame this as an optimization problem.There are many techniques to solve an optimization problem - sometimes they can be solved analytically (i.e. by moving around variables and isolating the one you want to solve for), but more often than not we must solve them numerically, i.e. by guessing a lot of different values - but not randomly!The prevailing technique now is called _gradient descent_, and to understand how it works, we have to understand derivatives. DerivativesDerivatives are everywhere in machine learning, so it's worthwhile become a bit familiar with them. I won't go into specifics on differentiation (how to calculate derivatives) because now we're spoiled with automatic differentiation, but it's still good to have a solid intuition about derivatives themselves.A derivative expresses a rate of (instantaneous) change - they are always about how one variable quantity changes with respect to another variable quantity. That's basically all there is to it. For instance, velocity is a derivative which expresses how position changes with respect to time. Another interpretation, which is more relevant to machine learning, is that a derivative tells us how to change one variable to achieve a desired change in the other variable. Velocity, for instance, tells us how to change position by "changing" time.To get a better understanding of _instantaneous_ change, consider a cyclist, cycling on a line. We have data about their position over time. We could calculate an average velocity over the data's entire time period, but we typically prefer to know the velocity at any given _moment_ (i.e. 
at any _instant_).Let's get more concrete first. Let's say we have data for $n$ seconds, i.e. from $t_0$ to $t_n$ seconds, and the position at any given second $i$ is $p_i$. If we wanted to get the rate of change in position over the entire time interval, we'd just do: | positions = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, # moving forward
9, 9, 9, 9, 9, 9, 9, 9, 9, 9, # pausing
9, 8, 7, 6, 5, 4, 3, 2, 1, 0] # moving backwards
t_0 = 0
t_n = 29
(positions[t_n] - positions[t_0])/t_n | _____no_output_____ | MIT | examples/fundamentals/fundamentals.ipynb | yo252yo/ml4a |
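Shrinking the interval down to a single second gives a per-second velocity; `np.diff` computes exactly these successive differences, and the three phases of the ride show up clearly:

```python
import numpy as np

positions = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9,   # moving forward
             9, 9, 9, 9, 9, 9, 9, 9, 9, 9,   # pausing
             9, 8, 7, 6, 5, 4, 3, 2, 1, 0]   # moving backwards

# velocity over each one-second interval: p[t+1] - p[t]
velocities = np.diff(positions)

print(velocities[:9])    # all 1: moving forward
print(velocities[9:20])  # all 0: pausing
print(velocities[20:])   # all -1: moving backwards
```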
$$v = \frac{p_n - p_0}{n}$$This kind of makes it look like the cyclist didn't move at all. It would probably be more useful to identify the velocity at a given second $t$. Thus we want to come up with some function $v(t)$ which gives us the velocity at some second $t$. We can apply the same approach we just used to get the velocity over the entire time interval, but we focus on a shorter time interval instead. To get the _instantaneous_ change at $t$ we just keep reducing the interval we look at until it is basically 0.Derivatives have a special notation. A derivative of a function $f(x)$ with respect to a variable $x$ is notated:$$\frac{\delta f(x)}{\delta x}$$So if position is a function of time, e.g. $p = f(t)$, then velocity can be represented as $\frac{\delta p}{\delta t}$. To drive the point home, this derivative is also a function of time (derivatives are functions of what their "with respect to" variable is).Since we are often computing derivatives of a function with respect to its input, a shorthand for the derivative of a function $f(x)$ with respect to $x$ can also be notated $f'(x)$. The Chain RuleA very important property of derivatives is the _chain rule_ (there are other "chain rules" throughout mathematics, if we want to be specific, this is the "chain rule of derivatives"). The chain rule is important because it allows us to take complicated nested functions and more manageably differentiate them.Let's look at an example to make this concrete: | def g(x):
return x**2
def h(x):
return x**3
def f(x):
return g(h(x))
# derivatives
def g_(x):
return 2*x
def h_(x):
return 3*(x**2) | _____no_output_____ | MIT | examples/fundamentals/fundamentals.ipynb | yo252yo/ml4a |
$$\begin{aligned}g(x) &= x^2 \\h(x) &= x^3 \\f(x) &= g(h(x)) \\g'(x) &= 2x \\h'(x) &= 3x^2\end{aligned}$$We're interested in understanding how $f(x)$ changes with respect to $x$, so we want to compute the derivative of $f(x)$. The chain rule allows us to differentiate the component functions of $f(x)$ separately and multiply the results -- taking care to evaluate the outer derivative at the inner function -- to get $f'(x) = g'(h(x)) \, h'(x)$: | def f_(x):
return g_(h(x)) * h_(x) | _____no_output_____ | MIT | examples/fundamentals/fundamentals.ipynb | yo252yo/ml4a |
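Here $f(x) = (x^3)^2 = x^6$, so the chain rule gives $f'(x) = g'(h(x)) \cdot h'(x) = 2x^3 \cdot 3x^2 = 6x^5$ -- note the outer derivative is evaluated at $h(x)$, not at $x$. A finite-difference check confirms the analytic result:

```python
def h(x):
    return x**3

def f(x):
    return h(x)**2  # g(h(x)) with g(x) = x**2

def f_prime(x):
    # chain rule: g'(h(x)) * h'(x) = 2*(x**3) * 3*(x**2) = 6*x**5
    return 6 * x**5

# numerical derivative via a small central difference
eps = 1e-6
x = 1.5
numeric = (f(x + eps) - f(x - eps)) / (2 * eps)

print(f_prime(x))  # 45.5625
print(numeric)     # very close to the analytic value
```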
Check EnvironmentThis notebook checks that you have correctly created the environment and that all packages needed are installed. EnvironmentThe next command should return a line like (Mac/Linux): //anaconda/envs/ztdl/bin/pythonor like (Windows 10): C:\\\\Anaconda3\\envs\\ztdl\\python.exeIn particular you should make sure that you are using the python executable from within the course environment.If that's not the case do this:1. close this notebook2. go to the terminal and stop jupyter notebook3. make sure that you have activated the environment, you should see a prompt like: (ztdl) $4. (optional) if you don't see that prompt activate the environment: - mac/linux: conda activate ztdl - windows: activate ztdl5. restart jupyter notebook | import os
import sys
sys.executable | _____no_output_____ | MIT | course/0_Check_Environment.ipynb | tetsu/zero-deep-learning |
Python 3.6The next line should say that you're using Python 3.6.x from Continuum Analytics. At the time of publication it looks like this (Mac/Linux): Python 3.6.7 |Anaconda, Inc.| (default, Oct 23 2018, 14:01:38) [GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)] on darwin Type "help", "copyright", "credits" or "license" for more information.or like this (Windows 10): Python 3.6.7 |Anaconda, Inc.| (default, Oct 28 2018, 19:44:12) [MSC v.1915 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information.but date and exact version of GCC may change in the future.If you see a different version of python, go back to the previous step and make sure you created and activated the environment correctly. | import sys
sys.version | _____no_output_____ | MIT | course/0_Check_Environment.ipynb | tetsu/zero-deep-learning |
JupyterCheck that Jupyter is running from within the environment. The next line should look like (Mac/Linux): //anaconda/envs/ztdl/lib/python3.6/site-packages/jupyter.py'or like this (Windows 10): C:\\Users\\paperspace\\Anaconda3\\envs\\ztdl\\lib\\site-packages\\jupyter.py | import jupyter
jupyter.__file__ | _____no_output_____ | MIT | course/0_Check_Environment.ipynb | tetsu/zero-deep-learning |
Other packagesHere we will check that all the packages are installed and have the correct versions. If everything is ok you should see: Using TensorFlow backend. Houston we are go!If there's any issue here please make sure you have checked the previous steps and if it's all good please send us a question in the Q&A forum. | import pip
import numpy
import jupyter
import matplotlib
import sklearn
import scipy
import pandas
import PIL
import seaborn
import h5py
import tensorflow
import keras
def check_version(pkg, version):
actual = pkg.__version__.split('.')
if len(actual) == 3:
actual_major = '.'.join(actual[:2])
elif len(actual) == 2:
actual_major = '.'.join(actual)
else:
raise NotImplementedError(pkg.__name__ +
"actual version :"+
pkg.__version__)
try:
assert(actual_major == version)
except Exception as ex:
print("{} {}\t=> {}".format(pkg.__name__,
version,
pkg.__version__))
raise ex
check_version(pip, '10.0')
check_version(numpy, '1.15')
check_version(matplotlib, '3.0')
check_version(sklearn, '0.20')
check_version(scipy, '1.1')
check_version(pandas, '0.23')
check_version(PIL, '5.3')
check_version(seaborn, '0.9')
check_version(h5py, '2.8')
check_version(tensorflow, '1.11')
check_version(keras, '2.2')
print("Houston we are go!") | Houston we are go!
| MIT | course/0_Check_Environment.ipynb | tetsu/zero-deep-learning |
Preprocessing | # Import our dependencies
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import pandas as pd
import tensorflow as tf
# Import and read the charity_data.csv.
import pandas as pd
application_df = pd.read_csv("charity_data.csv")
application_df.head()
# Drop the non-beneficial ID columns, 'EIN' and 'NAME'.
application_df = application_df.drop(['EIN', 'NAME'], axis=1)
# Determine the number of unique values in each column.
application_df.nunique()
# Look at APPLICATION_TYPE value counts for binning
val_count = application_df['APPLICATION_TYPE'].value_counts()
val_count
# Choose a cutoff value and create a list of application types to be replaced
# use the variable name `application_types_to_replace`
application_types_to_replace = list(val_count[val_count<200].index)
# Replace in dataframe
for app in application_types_to_replace:
application_df['APPLICATION_TYPE'] = application_df['APPLICATION_TYPE'].replace(app,"Other")
# Check to make sure binning was successful
application_df['APPLICATION_TYPE'].value_counts()
# Look at CLASSIFICATION value counts for binning
class_val_count = application_df['CLASSIFICATION'].value_counts()
class_val_count
# You may find it helpful to look at CLASSIFICATION value counts >1
class_val_count2 = class_val_count[class_val_count>1]
class_val_count2
# Choose a cutoff value and create a list of classifications to be replaced
# use the variable name `classifications_to_replace`
classifications_to_replace = class_val_count.loc[class_val_count < 1000].index
# Replace in dataframe
for cls in classifications_to_replace:
application_df['CLASSIFICATION'] = application_df['CLASSIFICATION'].replace(cls,"Other")
# Check to make sure binning was successful
application_df['CLASSIFICATION'].value_counts()
# Convert categorical data to numeric with `pd.get_dummies`
dummies_df = pd.get_dummies(application_df)
dummies_df
# Split our preprocessed data into our features and target arrays
X = dummies_df.drop(['IS_SUCCESSFUL'], axis=1).values
y = dummies_df['IS_SUCCESSFUL'].values
# Split the preprocessed data into a training and testing dataset
X_train, X_test, y_train, y_test = train_test_split(X,y,random_state= 42)
print(X.shape)
print(y.shape)
# Create a StandardScaler instances
scaler = StandardScaler()
# Fit the StandardScaler
X_scaler = scaler.fit(X_train)
# Scale the data
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test) | _____no_output_____ | ADSL | Alphabet_Soup_Charity_Final.ipynb | serastva/Deep_learning_challenge |
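Under the hood, `StandardScaler` simply centers each feature at its training-set mean and divides by its training-set standard deviation; a minimal numpy sketch of the same fit/transform pattern (on made-up data, not the charity dataset):

```python
import numpy as np

# made-up training data: 4 samples, 2 features
X_train = np.array([
    [1.0, 200.0],
    [2.0, 400.0],
    [3.0, 600.0],
    [4.0, 800.0],
])

# "fit": compute per-feature mean and std on the training set only
mean = X_train.mean(axis=0)
std = X_train.std(axis=0)

# "transform": the same statistics are reused for train and test data
X_train_scaled = (X_train - mean) / std

print(X_train_scaled.mean(axis=0))  # ~0 for each feature
print(X_train_scaled.std(axis=0))   # ~1 for each feature
```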
Compile, Train and Evaluate the Model | # Define the model - deep neural net, i.e., the number of input features and hidden nodes for each layer.
input_features = len(X_train_scaled[0])
hidden_nodes_layer1 = 80
hidden_nodes_layer2 = 30
#hidden_nodes_layer3 = 60
nn = tf.keras.models.Sequential()
# First hidden layer
nn.add(tf.keras.layers.Dense(units=hidden_nodes_layer1, input_dim=input_features, activation='relu'))
# Second hidden layer
nn.add(tf.keras.layers.Dense(units=hidden_nodes_layer2, activation='relu'))
# Third hidden layer
#nn.add(tf.keras.layers.Dense(units=hidden_nodes_layer3, activation='relu'))
# Output layer
nn.add(tf.keras.layers.Dense(units=1, activation='sigmoid'))
# Check the structure of the model
nn.summary()
# Compile the model
nn.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
# Train the model
fit_model = nn.fit(X_train_scaled, y_train, epochs=100)
# Evaluate the model using the test data
model_loss, model_accuracy = nn.evaluate(X_test_scaled,y_test,verbose=2)
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
# Export our model to HDF5 file
nn.save("AlphabetSoupCharity.h5") | _____no_output_____ | ADSL | Alphabet_Soup_Charity_Final.ipynb | serastva/Deep_learning_challenge |
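The model above is compiled with `binary_crossentropy`, which penalizes confident wrong predictions heavily. A numpy sketch of the formula, $-\frac{1}{n}\sum_i [y_i \log \hat y_i + (1 - y_i) \log(1 - \hat y_i)]$, on made-up predictions:

```python
import numpy as np

def binary_crossentropy(y_true, y_pred):
    # clip predictions away from 0 and 1 to avoid log(0)
    y_pred = np.clip(y_pred, 1e-7, 1 - 1e-7)
    return -np.mean(
        y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([1, 0, 1, 0])

good = binary_crossentropy(y_true, np.array([0.9, 0.1, 0.8, 0.2]))
bad = binary_crossentropy(y_true, np.array([0.1, 0.9, 0.2, 0.8]))

print(good)  # small loss: confident and correct
print(bad)   # large loss: confident and wrong
```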
**Optimization 1** | # Define the model - deep neural net, i.e., the number of input features and hidden nodes for each layer.
input_features = len(X_train_scaled[0])
hidden_nodes_layer1 = 43
hidden_nodes_layer2 = 25
#hidden_nodes_layer3 = 15
nn2 = tf.keras.models.Sequential()
# First hidden layer
nn2.add(tf.keras.layers.Dense(units=hidden_nodes_layer1, input_dim=input_features, activation='relu'))
# Second hidden layer
nn2.add(tf.keras.layers.Dense(units=hidden_nodes_layer2, activation='relu'))
# Third hidden layer
#nn.add(tf.keras.layers.Dense(units=hidden_nodes_layer3, activation='relu'))
# Output layer
nn2.add(tf.keras.layers.Dense(units=1, activation='sigmoid'))
# Check the structure of the model
nn2.summary()
# Compile the model
nn2.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
# Train the model
fit_model = nn2.fit(X_train_scaled, y_train, epochs=200)
# Evaluate the model using the test data
model_loss, model_accuracy = nn2.evaluate(X_test_scaled,y_test,verbose=2)
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
# Export our model to HDF5 file
nn2.save("AlphabetSoupCharity_optimize_1.h5") | _____no_output_____ | ADSL | Alphabet_Soup_Charity_Final.ipynb | serastva/Deep_learning_challenge |
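The parameter counts shown by `summary()` follow a simple rule: a dense layer with $n$ inputs and $u$ units holds $u \times (n + 1)$ parameters -- one weight per input plus one bias per unit. A sketch using the layer sizes of this attempt, with a hypothetical 43 input features (the real width depends on the dummy-encoded columns):

```python
def dense_params(n_inputs, units):
    # one weight per (input, unit) pair, plus one bias per unit
    return units * (n_inputs + 1)

# hypothetical input width; the layer sizes match this attempt
n_in = 43
layer1 = dense_params(n_in, 43)   # 43 * 44 = 1892
layer2 = dense_params(43, 25)     # 25 * 44 = 1100
output = dense_params(25, 1)      # 1 * 26 = 26

print(layer1 + layer2 + output)   # 3018 trainable parameters
```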
**Optimization 2** | # Define the model - deep neural net, i.e., the number of input features and hidden nodes for each layer.
input_features = len(X_train_scaled[0])
hidden_nodes_layer1 = 20
hidden_nodes_layer2 = 40
hidden_nodes_layer3 = 80
nn3 = tf.keras.models.Sequential()
# First hidden layer
nn3.add(tf.keras.layers.Dense(units=hidden_nodes_layer1, input_dim=input_features, activation='relu'))
# Second hidden layer
nn3.add(tf.keras.layers.Dense(units=hidden_nodes_layer2, activation='relu'))
# Third hidden layer
nn3.add(tf.keras.layers.Dense(units=hidden_nodes_layer3, activation='relu'))
# Output layer
nn3.add(tf.keras.layers.Dense(units=1, activation='sigmoid'))
# Check the structure of the model
nn3.summary()
# Compile the model
nn3.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
# Train the model
fit_model = nn3.fit(X_train_scaled, y_train, epochs=200)
# Evaluate the model using the test data
model_loss, model_accuracy = nn3.evaluate(X_test_scaled,y_test,verbose=2)
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
# Export our model to HDF5 file
nn3.save("AlphabetSoupCharity_optimize_2.h5")
# Define the model - deep neural net, i.e., the number of input features and hidden nodes for each layer.
input_features = len(X_train_scaled[0])
hidden_nodes_layer1 = 60
hidden_nodes_layer2 = 60
hidden_nodes_layer3 = 40
hidden_nodes_layer4 = 20
nn4 = tf.keras.models.Sequential()
# First hidden layer
nn4.add(tf.keras.layers.Dense(units=hidden_nodes_layer1, input_dim=input_features, activation='relu'))
# Second hidden layer
nn4.add(tf.keras.layers.Dense(units=hidden_nodes_layer2, activation='relu'))
# Third hidden layer
nn4.add(tf.keras.layers.Dense(units=hidden_nodes_layer3, activation='relu'))
# Fourth hidden layer
nn4.add(tf.keras.layers.Dense(units=hidden_nodes_layer4, activation='relu'))
# Output layer
nn4.add(tf.keras.layers.Dense(units=1, activation='sigmoid'))
# Check the structure of the model
nn4.summary()
# Compile the model
nn4.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
# Train the model
fit_model = nn4.fit(X_train_scaled, y_train, epochs=200)
# Evaluate the model using the test data
model_loss, model_accuracy = nn4.evaluate(X_test_scaled,y_test,verbose=2)
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
# Export our model to HDF5 file
nn4.save("AlphabetSoupCharity_optimize_3.h5") | _____no_output_____ | ADSL | Alphabet_Soup_Charity_Final.ipynb | serastva/Deep_learning_challenge |
Numpy We have seen Python's basic data structures in the last section. They are great but lack specialized features for data analysis: operations such as adding rows or columns and working on 2-D matrices aren't readily available. So we will use *numpy* for such functions. | import numpy as np | _____no_output_____ | MIT | notebooks/PE_Numpy.ipynb | NickBaynham/aimldl |
Numpy operates on *nd* arrays. These are similar to lists but contain homogeneous elements, which makes them better suited to storing 2-D data. | l1 = [1,2,3,4]
nd1 = np.array(l1)
print(nd1)
l2 = [5,6,7,8]
nd2 = np.array([l1,l2])
print(nd2) | [1 2 3 4]
[[1 2 3 4]
[5 6 7 8]]
| MIT | notebooks/PE_Numpy.ipynb | NickBaynham/aimldl |
Some functions on np.array() | print(nd2.shape)
print(nd2.size)
print(nd2.dtype) | (2, 4)
8
int64
| MIT | notebooks/PE_Numpy.ipynb | NickBaynham/aimldl |
Question 1Create an identity 2d-array or matrix (with ones across the diagonal).[ **Hint:** You can also use **np.identity()** function ] | _____no_output_____ | MIT | notebooks/PE_Numpy.ipynb | NickBaynham/aimldl | |
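One possible solution sketch for Question 1 (the notebook leaves the cell blank for the reader; the 3×3 order is an assumption, and `np.eye(3)` would work equally well):

```python
import numpy as np

identity = np.identity(3)  # np.eye(3) is equivalent
print(identity)
```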
Question 2Create a 2d-array or matrix of order 3x3 with values = 9,8,7,6,5,4,3,2,1 arranged in the same order.Use: **np.array()** function | _____no_output_____ | MIT | notebooks/PE_Numpy.ipynb | NickBaynham/aimldl | |
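One possible solution sketch for Question 2, writing the nine values out explicitly with `np.array()`:

```python
import numpy as np

nd4 = np.array([[9, 8, 7],
                [6, 5, 4],
                [3, 2, 1]])
# equivalently: np.arange(9, 0, -1).reshape(3, 3)
print(nd4)
```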
Question 3Interchange both the rows and columns of the given matrix.Hint: You can use the transpose **.T**) | _____no_output_____ | MIT | notebooks/PE_Numpy.ipynb | NickBaynham/aimldl | |
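A solution sketch for Question 3, assuming "the given matrix" means the 3×3 matrix from Question 2:

```python
import numpy as np

m = np.array([[9, 8, 7],
              [6, 5, 4],
              [3, 2, 1]])
print(m.T)  # rows become columns and vice versa; np.transpose(m) is equivalent
```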
Question 4Add + 1 to all the elements in the given matrix. | _____no_output_____ | MIT | notebooks/PE_Numpy.ipynb | NickBaynham/aimldl | |
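A solution sketch for Question 4, again assuming the Question 2 matrix as input; broadcasting applies the scalar to every element:

```python
import numpy as np

m = np.array([[9, 8, 7],
              [6, 5, 4],
              [3, 2, 1]])
plus_one = m + 1  # broadcasting adds 1 to every element
print(plus_one)
```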
Similarly you can do operations like scalar subtraction, division, multiplication (operating on each element in the matrix) Question 5Find the mean of all elements in the given matrix nd6.nd6 = [[ 1 4 9 121 144 169] [ 16 25 36 196 225 256] [ 49 64 81 289 324 361]] Use: **.mean()** function | _____no_output_____ | MIT | notebooks/PE_Numpy.ipynb | NickBaynham/aimldl |
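A solution sketch for Question 5, typing in the `nd6` matrix given in the question; `.mean()` averages over all 18 elements:

```python
import numpy as np

nd6 = np.array([[1, 4, 9, 121, 144, 169],
                [16, 25, 36, 196, 225, 256],
                [49, 64, 81, 289, 324, 361]])
print(nd6.mean())
```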
Question 7Find the dot product of two given matrices.[**Hint:** Use **np.dot()**] | _____no_output_____ | MIT | notebooks/PE_Numpy.ipynb | NickBaynham/aimldl | |
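A solution sketch for Question 7. The exercise does not fix the input matrices, so these two 2×2 matrices are assumptions chosen for illustration:

```python
import numpy as np

a = np.array([[1, 2],
              [3, 4]])
b = np.array([[5, 6],
              [7, 8]])
product = np.dot(a, b)  # a @ b is equivalent
print(product)
```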
Array Slicing/Indexing:- Now we'll learn to access multiple elements or a range of elements from an array. | x = np.arange(20)
x | _____no_output_____ | MIT | notebooks/PE_Numpy.ipynb | NickBaynham/aimldl |
Question 8Print the array elements from start to 4th position | _____no_output_____ | MIT | notebooks/PE_Numpy.ipynb | NickBaynham/aimldl | |
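A solution sketch for Question 8, interpreting "4th position" as index 4 inclusive (an assumption, since slicing excludes the stop index):

```python
import numpy as np

x = np.arange(20)
print(x[:5])  # elements at indices 0 through 4
```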
Question 9- Return elements from first position with step size 2. (Difference between consecutive elements is 2) | _____no_output_____ | MIT | notebooks/PE_Numpy.ipynb | NickBaynham/aimldl | |
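A solution sketch for Question 9, using the step argument of a slice:

```python
import numpy as np

x = np.arange(20)
print(x[::2])  # every second element starting from index 0
```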
Question 10Reverse the array using array indexing: | _____no_output_____ | MIT | notebooks/PE_Numpy.ipynb | NickBaynham/aimldl | |
The Graph Data AccessIn this notebook, we read in the data that was generated and saved as a csv from the [TheGraphDataSetCreation](TheGraphDataSetCreation.ipynb) notebook. Goals of this notebook are to obtain:* Signals, states, event and sequences* Volatility metrics* ID perceived shocks (correlated with announcements)* Signal for target price* Signal for market price* Error plotAs a starting point for moving to a decision support system. | # import libraries
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import scipy as sp
from statsmodels.distributions.empirical_distribution import ECDF
import scipy.stats as stats | _____no_output_____ | MIT | notebooks/analysis/TheGraphDataAnalysis.ipynb | trangnv/geb-simulations-h20 |
Import data and add additional attributes | graphData = pd.read_csv('saved_results/RaiLiveGraphData.csv')
del graphData['Unnamed: 0']
graphData.head()
graphData.describe()
graphData.plot(x='blockNumber',y='redemptionPriceActual',kind='line',title='redemptionPriceActual')
graphData.plot(x='blockNumber',y='redemptionRateActual',kind='line',title='redemptionRateActual')
graphData['error'] = graphData['redemptionPriceActual'] - graphData['marketPriceUsd']
graphData['error_integral'] = graphData['error'].cumsum()
graphData.plot(x='blockNumber',y='error',kind='line',title='error')
graphData.plot(x='blockNumber',y='error_integral',kind='line',title='Steady state error') | _____no_output_____ | MIT | notebooks/analysis/TheGraphDataAnalysis.ipynb | trangnv/geb-simulations-h20 |
Error experimentation Note: not taking into account control period | kp = 2e-7
#ki = (-kp * error)/(integral_error)
# computing at each time, what would the value of ki need to be such that the redemption price would be constant
graphData['equilibriation_ki'] = (-kp * graphData.error)/graphData.error_integral
# todo iterate through labels and append negative
graphData['equilibriation_ki'].apply(lambda x: -x).plot(logy = True,title='Actual equilibriation_ki - flipped sign for log plotting')
plt.hlines(5e-9, 0, 450, linestyles='solid', label='Recommended ki - flipped sign', color='r')
plt.hlines(-(graphData['equilibriation_ki'].median()), 0, 450, linestyles='solid', label='median actual ki - flipped', color='g')
locs,labels = plt.yticks() # Get the current tick locations and labels
new_locs = []
for i in locs:
new_locs.append('-'+str(i))
plt.yticks(locs, new_locs)
plt.legend(loc="upper right")
graphData['equilibriation_ki'].median() | _____no_output_____ | MIT | notebooks/analysis/TheGraphDataAnalysis.ipynb | trangnv/geb-simulations-h20 |
Counterfactual if intergral control rate had been median the whole time | graphData['counterfactual_redemption_rate'] = (kp * graphData['error'] + graphData['equilibriation_ki'].median())/ graphData['error_integral']
subsetGraph = graphData.iloc[50:]
sns.lineplot(data=subsetGraph,x="blockNumber", y="counterfactual_redemption_rate",label='Counterfactual')
ax2 = plt.twinx()
# let reflexer know this is wrong
sns.lineplot(data=subsetGraph,x="blockNumber", y="redemptionRateActual",ax=ax2,color='r',label='Actual')
plt.title('Actual redemption rate vs counterfactual')
plt.legend(loc="upper left")
| _____no_output_____ | MIT | notebooks/analysis/TheGraphDataAnalysis.ipynb | trangnv/geb-simulations-h20 |
Goodness of fit tests We test whether the counterfactual is far enough from the actual series to reject the null hypothesis that both come from the same distribution. | # fit a cdf
ecdf = ECDF(subsetGraph.redemptionRateActual.values)
ecdf2 = ECDF(subsetGraph.counterfactual_redemption_rate.values)
plt.plot(ecdf.x,ecdf.y,color='r')
plt.title('redemptionRateActual ECDF')
plt.show()
plt.plot(ecdf2.x,ecdf2.y,color='b')
plt.title('counterfactual_redemption_rate ECDF')
plt.show()
alpha = 0.05
statistic, p_value = stats.ks_2samp(subsetGraph.redemptionRateActual.values, subsetGraph.counterfactual_redemption_rate.values) # two sided
if p_value > alpha:
    decision = "Fail to reject the null: the samples could come from the same distribution"
elif p_value <= alpha:
    decision = "Reject the null: the samples come from different distributions"
print(p_value)
print(decision) | _____no_output_____ | MIT | notebooks/analysis/TheGraphDataAnalysis.ipynb | trangnv/geb-simulations-h20 |
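As an aside, what statsmodels' `ECDF` computes above can be sketched in plain Python. This toy version is only for intuition — it is not the library implementation, and the sample values below are assumptions:

```python
def ecdf(sample):
    """F(t) = fraction of sample values that are <= t."""
    ordered = sorted(sample)
    n = len(ordered)
    def F(t):
        # count of observations at or below t, divided by the sample size
        return sum(1 for v in ordered if v <= t) / n
    return F

F = ecdf([1, 2, 2, 3])
print(F(2))   # -> 0.75
```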
Based on our analysis using the Kolmogorov-Smirnov goodness-of-fit test, the distributions are very different. As can be seen from their ECDF plots above, there is a difference in their distributions; pay close attention to the x axis and you can see the difference is significant. | # scatterplot of linear regression residuals
sns.residplot(x='blockNumber', y='redemptionRateActual', data=subsetGraph, label='redemptionRateActual')
plt.title('redemptionRateActual regression residuals')
sns.residplot(x='blockNumber', y='counterfactual_redemption_rate', data=subsetGraph,label='counterfactual_redemption_rate')
plt.title('counterfactual_redemption_rate regression residuals')
graphData.plot(x='blockNumber',y='globalDebt',kind='line',title='globalDebt')
graphData.plot(x='blockNumber',y='erc20CoinTotalSupply',kind='line',title='erc20CoinTotalSupply')
graphData.plot(x='blockNumber',y='marketPriceEth',kind='line',title='marketPriceEth')
graphData.plot(x='blockNumber',y='marketPriceUsd',kind='line',title='marketPriceUsd') | _____no_output_____ | MIT | notebooks/analysis/TheGraphDataAnalysis.ipynb | trangnv/geb-simulations-h20 |
Recurrent Neural Networks | import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt | _____no_output_____ | MIT | course/.ipynb_checkpoints/8 Recurrent Neural Networks-checkpoint.ipynb | ResitKadir1/aws-DL |
Time series forecasting | df = pd.read_csv('../data/cansim-0800020-eng-6674700030567901031.csv',
skiprows=6, skipfooter=9,
engine='python')
df.head()
from pandas.tseries.offsets import MonthEnd
df['Adjustments'] = pd.to_datetime(df['Adjustments']) + MonthEnd(1)
df = df.set_index('Adjustments')
df.head()
df.plot()
split_date = pd.Timestamp('01-01-2011')
train = df.loc[:split_date, ['Unadjusted']]
test = df.loc[split_date:, ['Unadjusted']]
ax = train.plot()
test.plot(ax=ax)
plt.legend(['train', 'test'])
from sklearn.preprocessing import MinMaxScaler
sc = MinMaxScaler()
# fit the scaler on the training data only (important: never fit on the test set)
train_sc = sc.fit_transform(train)
test_sc = sc.transform(test)
train_sc[:4]
#the model learns to predict each value from the previous one (e.g. 0.01402033 from 0.)
X_train = train_sc[:-1]
y_train = train_sc[1:]
X_test = test_sc[:-1]
y_test = test_sc[1:] | _____no_output_____ | MIT | course/.ipynb_checkpoints/8 Recurrent Neural Networks-checkpoint.ipynb | ResitKadir1/aws-DL |
Fully connected predictor | from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
import tensorflow.keras.backend as K
from tensorflow.keras.callbacks import EarlyStopping
K.clear_session()
model = Sequential()
model.add(Dense(12, #12 nodes
input_dim=1, #one input
activation='relu'))
model.add(Dense(1)) # one output; no activation function since this is a regression problem
model.compile(loss='mean_squared_error', optimizer='adam')
model.summary()
early_stop = EarlyStopping(monitor='loss', patience=1, verbose=1)
model.fit(X_train, y_train, epochs=200,
batch_size=2, verbose=1,
callbacks=[early_stop])
y_pred = model.predict(X_test)
plt.plot(y_test)
plt.plot(y_pred)
#fully connected network
#very bad performance: the model just learned to mirror the previous value
Recurrent predictor | from tensorflow.keras.layers import LSTM
X_train.shape
#3D tensor with shape (batch_size, timesteps, input_dim)
X_train[:, None].shape
X_train_t = X_train[:, None]
X_test_t = X_test[:, None]
K.clear_session()
model = Sequential()
model.add(LSTM(6,
input_shape=(1, 1)#1 timestep ,1 number
))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(X_train_t, y_train,
epochs=100,
batch_size=1 #training each data point
, verbose=1,
callbacks=[early_stop])
y_pred = model.predict(X_test_t)
plt.plot(y_test)
plt.plot(y_pred)
#unfortunately the LSTM didn't improve model performance
#now we try windows of lagged values
Windows | train_sc.shape
train_sc_df = pd.DataFrame(train_sc, columns=['Scaled'], index=train.index)
test_sc_df = pd.DataFrame(test_sc, columns=['Scaled'], index=test.index)
train_sc_df.head()
#create 12 monthly lag features (shifts)
for s in range(1, 13):
train_sc_df['shift_{}'.format(s)] = train_sc_df['Scaled'].shift(s)
test_sc_df['shift_{}'.format(s)] = test_sc_df['Scaled'].shift(s)
train_sc_df.head(13)
#drop rows with nulls, which means dropping the first year
X_train = train_sc_df.dropna().drop('Scaled', axis=1)
y_train = train_sc_df.dropna()[['Scaled']]
X_test = test_sc_df.dropna().drop('Scaled', axis=1)
y_test = test_sc_df.dropna()[['Scaled']]
X_train.head()
X_train.shape
X_train = X_train.values
X_test= X_test.values
y_train = y_train.values
y_test = y_test.values | _____no_output_____ | MIT | course/.ipynb_checkpoints/8 Recurrent Neural Networks-checkpoint.ipynb | ResitKadir1/aws-DL |
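The shift-based windowing above can be sketched in plain Python: each sample pairs the previous `size` observations with the current one. The notebook uses 12 monthly lags; the toy series and window size below are assumptions for illustration:

```python
def make_windows(series, size):
    """Turn a 1-D series into (features, target) pairs using `size` lags."""
    X, y = [], []
    for i in range(size, len(series)):
        X.append(series[i - size:i])  # the `size` previous observations
        y.append(series[i])           # the value to predict
    return X, y

X, y = make_windows([1, 2, 3, 4, 5, 6], 3)
print(X, y)   # -> [[1, 2, 3], [2, 3, 4], [3, 4, 5]] [4, 5, 6]
```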
Fully Connected on Windows | K.clear_session()
model = Sequential()
model.add(Dense(12, input_dim=12, activation='relu'))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.summary()
model.fit(X_train, y_train, epochs=200,
batch_size=1, verbose=1, callbacks=[early_stop])
y_pred = model.predict(X_test)
plt.plot(y_test)
plt.plot(y_pred)
#this model performs much better than the previous models
#the predicted and expected lines overlap
| MIT | course/.ipynb_checkpoints/8 Recurrent Neural Networks-checkpoint.ipynb | ResitKadir1/aws-DL |
LSTM on Windows | X_train_t = X_train.reshape(X_train.shape[0], 1, 12)
X_test_t = X_test.reshape(X_test.shape[0], 1, 12)
X_train_t.shape #one time instance with 12 vector coordinates
K.clear_session()
model = Sequential()
model.add(LSTM(6, input_shape=(1, 12) # 1 time step with 12 features
))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.summary()
model.fit(X_train_t, y_train, epochs=100,
batch_size=1, verbose=1, callbacks=[early_stop])
y_pred = model.predict(X_test_t)
plt.plot(y_test)
plt.plot(y_pred)
#best model | WARNING:tensorflow:5 out of the last 12 calls to <function Model.make_predict_function.<locals>.predict_function at 0x7fde7ca8ad40> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
| MIT | course/.ipynb_checkpoints/8 Recurrent Neural Networks-checkpoint.ipynb | ResitKadir1/aws-DL |
Handwritten Digits Classifier with Improved Accuracy using Data Augmentation In previous steps, we trained a model that could recognize handwritten digits using the MNIST dataset. We were able to achieve above 98% accuracy on our validation dataset. However, when you deploy the model in an Android app and test it, you probably noticed an accuracy issue. Although the app was able to recognize digits that you drew, the accuracy was probably way lower than 98%. In this notebook we will explore the cause of the accuracy drop and use data augmentation to improve deployment accuracy. PreparationLet's start by importing TensorFlow and other supporting libraries that are used for data processing and visualization. | import tensorflow as tf
from tensorflow import keras
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import random
print(tf.__version__) | 2.6.0
| Apache-2.0 | notebooks/handwritten_digits_tunning.ipynb | dgavieira/handwritten-digits-recognition-app |
Import the MNIST dataset | mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images / 255.0
test_images = test_images / 255.0
# Add a color dimension to the images in "train" and "validate" dataset to
# leverage Keras's data augmentation utilities later.
train_images = np.expand_dims(train_images, axis=3)
test_images = np.expand_dims(test_images, axis=3) | Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
11501568/11490434 [==============================] - 0s 0us/step
| Apache-2.0 | notebooks/handwritten_digits_tunning.ipynb | dgavieira/handwritten-digits-recognition-app |
Define an utility function so that we can create quickly create multiple models with the same model architecture for comparison. | def create_model():
model = keras.Sequential(
[
keras.layers.InputLayer(input_shape=(28,28,1)),
keras.layers.Conv2D(filters=32, kernel_size=(3,3), activation=tf.nn.relu),
keras.layers.Conv2D(filters=64, kernel_size=(3,3), activation=tf.nn.relu),
keras.layers.MaxPooling2D(pool_size=(2,2)),
keras.layers.Dropout(0.25),
keras.layers.Flatten(),
keras.layers.Dense(10, activation=tf.nn.softmax)
]
)
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
return model | _____no_output_____ | Apache-2.0 | notebooks/handwritten_digits_tunning.ipynb | dgavieira/handwritten-digits-recognition-app |
Confirm that our model can achieve above 98% accuracy on MNIST Dataset. | base_model = create_model()
base_model.fit(
train_images,
train_labels,
epochs=5,
validation_data=(test_images, test_labels)
) | Epoch 1/5
1875/1875 [==============================] - 43s 7ms/step - loss: 0.1418 - accuracy: 0.9569 - val_loss: 0.0580 - val_accuracy: 0.9815
Epoch 2/5
1875/1875 [==============================] - 12s 7ms/step - loss: 0.0544 - accuracy: 0.9836 - val_loss: 0.0515 - val_accuracy: 0.9829
Epoch 3/5
1875/1875 [==============================] - 13s 7ms/step - loss: 0.0414 - accuracy: 0.9869 - val_loss: 0.0385 - val_accuracy: 0.9875
Epoch 4/5
1875/1875 [==============================] - 12s 7ms/step - loss: 0.0324 - accuracy: 0.9900 - val_loss: 0.0362 - val_accuracy: 0.9891
Epoch 5/5
1875/1875 [==============================] - 12s 7ms/step - loss: 0.0267 - accuracy: 0.9913 - val_loss: 0.0373 - val_accuracy: 0.9884
| Apache-2.0 | notebooks/handwritten_digits_tunning.ipynb | dgavieira/handwritten-digits-recognition-app |
Troubleshoot the accuracy dropLet's see the digit images in MNIST again and guess the cause of the accuracy drop we experienced in deployment. | # Show the first 25 images in the training dataset.
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(np.squeeze(train_images[i], axis=2), cmap=plt.cm.gray)
plt.xlabel(train_labels[i])
plt.show() | _____no_output_____ | Apache-2.0 | notebooks/handwritten_digits_tunning.ipynb | dgavieira/handwritten-digits-recognition-app |
We can see from the 25 images above that the digits are about the same size, and they are in the center of the images. Let's verify if this assumption is true across the MNIST dataset. | # An utility function that returns where the digit is in the image.
def digit_area(mnist_image):
# Remove the color axes
mnist_image = np.squeeze(mnist_image, axis=2)
# Extract the list of columns that contain at least 1 pixel from the digit
x_nonzero = np.nonzero(np.amax(mnist_image, 0))
x_min = np.min(x_nonzero)
x_max = np.max(x_nonzero)
# Extract the list of rows that contain at least 1 pixel from the digit
y_nonzero = np.nonzero(np.amax(mnist_image, 1))
y_min = np.min(y_nonzero)
y_max = np.max(y_nonzero)
return [x_min, x_max, y_min, y_max]
# Calculate the area containing the digit across MNIST dataset
digit_area_rows = []
for image in train_images:
digit_area_row = digit_area(image)
digit_area_rows.append(digit_area_row)
digit_area_df = pd.DataFrame(
digit_area_rows,
columns=['x_min', 'x_max', 'y_min', 'y_max']
)
digit_area_df.hist() | _____no_output_____ | Apache-2.0 | notebooks/handwritten_digits_tunning.ipynb | dgavieira/handwritten-digits-recognition-app |
Now from the histogram, you can confirm that the digits in MNIST images fit nicely in a certain area at the center of the images.[MNIST Range](https://download.tensorflow.org/models/tflite/digit_classifier/mnist_range.png)However, when you wrote digits in your Android app, you probably did not pay attention to making sure your digit fit in the virtual area where the digits appear in the MNIST dataset. The machine learning model has not seen such data before, so it performed poorly, especially when you wrote a digit that was off the center of the drawing pad. Let's add some data augmentation to the MNIST dataset to verify if our assumption is true. We will distort our MNIST dataset by adding:* Rotation* Width and height shift* Shear* Zoom | # Define data augmentation
datagen = keras.preprocessing.image.ImageDataGenerator(
rotation_range=30,
width_shift_range=0.25,
height_shift_range=0.25,
shear_range=0.25,
zoom_range=0.2
)
# Generate augmented data from MNIST dataset
train_generator = datagen.flow(train_images, train_labels)
test_generator = datagen.flow(test_images, test_labels) | _____no_output_____ | Apache-2.0 | notebooks/handwritten_digits_tunning.ipynb | dgavieira/handwritten-digits-recognition-app |
Let's see what our digit images look like after augmentation. You can see that we now clearly have much more variation on how the digits are placed in the images. | augmented_images, augmented_labels = next(train_generator)
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5, 5, i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(np.squeeze(augmented_images[i], axis=2), cmap=plt.cm.gray)
plt.xlabel('Label: %d' % augmented_labels[i])
plt.show() | _____no_output_____ | Apache-2.0 | notebooks/handwritten_digits_tunning.ipynb | dgavieira/handwritten-digits-recognition-app |
Let's evaluate the digit classifier model that we trained earlier on this augmented test dataset and see if its accuracy drops. | base_model.evaluate(test_generator) | 313/313 [==============================] - 6s 18ms/step - loss: 5.6356 - accuracy: 0.3272
| Apache-2.0 | notebooks/handwritten_digits_tunning.ipynb | dgavieira/handwritten-digits-recognition-app |
You can see that accuracy dropped significantly, to below 40%, on the augmented test dataset. Improve accuracy with data augmentationNow let's train our model using the augmented dataset to make it perform better in deployment. | improved_model = create_model()
improved_model.fit(train_generator, epochs=5, validation_data=test_generator) | Epoch 1/5
1875/1875 [==============================] - 41s 22ms/step - loss: 0.9860 - accuracy: 0.6849 - val_loss: 0.5181 - val_accuracy: 0.8516
Epoch 2/5
1875/1875 [==============================] - 40s 22ms/step - loss: 0.5070 - accuracy: 0.8491 - val_loss: 0.3765 - val_accuracy: 0.8877
Epoch 3/5
1875/1875 [==============================] - 41s 22ms/step - loss: 0.4145 - accuracy: 0.8758 - val_loss: 0.3064 - val_accuracy: 0.9102
Epoch 4/5
1875/1875 [==============================] - 41s 22ms/step - loss: 0.3671 - accuracy: 0.8896 - val_loss: 0.2749 - val_accuracy: 0.9219
Epoch 5/5
1875/1875 [==============================] - 41s 22ms/step - loss: 0.3275 - accuracy: 0.9017 - val_loss: 0.2593 - val_accuracy: 0.9249
| Apache-2.0 | notebooks/handwritten_digits_tunning.ipynb | dgavieira/handwritten-digits-recognition-app |
We can see that as the model saw more distorted digit images during training, its accuracy on the distorted test images improved significantly, to about 90%. Convert to TensorFlow LiteLet's convert the improved model to TensorFlow Lite and redeploy the Android app. | # Convert Keras model to TF Lite format and quantize.
converter = tf.lite.TFLiteConverter.from_keras_model(improved_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quantized_model = converter.convert()
# Save the Quantized Model to file to the Downloads Directory
f = open('mnist-improved.tflite', "wb")
f.write(tflite_quantized_model)
f.close()
# Download the digit classification model
from google.colab import files
files.download('mnist-improved.tflite')
| _____no_output_____ | Apache-2.0 | notebooks/handwritten_digits_tunning.ipynb | dgavieira/handwritten-digits-recognition-app |
_____no_output_____ | MIT | nltk_treinar_etiquetador.ipynb | ilexistools/ebralc2021 | ||
Training a morphosyntactic (POS) tagger To perform part-of-speech tagging of texts, we first need a tagger. NLTK offers several options for building word classifiers and taggers: DefaultTagger, RegexpTagger, UnigramTagger, BigramTagger, TrigramTagger, BrillTagger, among other classifiers. Building a tagger requires training data: previously tagged texts, in the form of tagged sentences, specifically lists of tuples for NLTK. From the data and the training algorithms, an object (the tagger) is created that can be stored for future use, since training takes considerable time. Resources To run the usage test, we need to load the mac_morpho corpus and the 'punkt' tokenizer from the nltk library: | import nltk
nltk.download('mac_morpho')
nltk.download('punkt')
import nltk
import pickle
from nltk.corpus import mac_morpho
# prepare training and test data
sents = mac_morpho.tagged_sents()
trein = sents[0:30000]
teste = sents[13000:]
# train a sequential backoff tagger
etq1 = nltk.DefaultTagger('N')
etq2 = nltk.UnigramTagger(trein,backoff=etq1)
etq3 = nltk.BigramTagger(trein,backoff=etq2)
# print the accuracy of each tagger
print('DefaultTagger', etq1.evaluate(teste))
print('UnigramTagger', etq2.evaluate(teste))
print('BigramTagger', etq3.evaluate(teste))
# store the trained tagger for later use
with open('etq.pickle','wb') as fh:
pickle.dump(etq3,fh) | DefaultTagger 0.20087708474599295
UnigramTagger 0.8237940367746194
BigramTagger 0.842816406510894
| MIT | nltk_treinar_etiquetador.ipynb | ilexistools/ebralc2021 |
In the example, we load the tagged data for training and testing the tagger. We set aside a larger share of sentences for training (70%) and a smaller one for testing (30%). Next, we train a sequential backoff tagger from three different models combined. The 'DefaultTagger' assigns a default tag ('N') to every word. The 'UnigramTagger', trained on the sentences, assigns the most likely tag to each word using an internally built dictionary. The 'BigramTagger', also trained on the tagged sentences, assigns the most likely tag to a word based on the previous tag (the Markov assumption). The taggers are combined sequentially through the 'backoff' argument. Having trained the tagger, we evaluate the accuracy of each stage with the 'evaluate()' function, passing the 'teste' variable, which holds part of the tagged sentences from the MacMorpho corpus. In the process, the performance of each tagger is printed separately. The final tagger, the combination of all of them, reaches 84% accuracy on the test data. Finally, we store the trained tagger for later use through the 'dump' function of the 'pickle' module. To verify that the tagger works, we run the following test: | import nltk
import pickle
# load the trained tagger
with open('etq.pickle','rb') as fh:
etiquetador = pickle.load(fh)
# a sample text to be tagged
texto = 'Estamos realizando um teste agora.'
# tokenize the text
itens = nltk.word_tokenize(texto,language='portuguese')
# tag the tokens
itens_etiquetados = etiquetador.tag(itens)
# print the result
print(itens_etiquetados)
| [('Estamos', 'V'), ('realizando', 'V'), ('um', 'ART'), ('teste', 'N'), ('agora', 'ADV'), ('.', '.')]
| MIT | nltk_treinar_etiquetador.ipynb | ilexistools/ebralc2021 |
An example of an optimal pruning based routing algorithm- based on a simple graph and Dijkstra's algorithm with concave cost function- Create a simple graph with multiple edge's attributes¶ - weight = w_ij - concave = c_ij where i,j is nodes | import networkx as nx
import matplotlib.pyplot as plt | _____no_output_____ | MIT | Python_Graph/A simple graph with optimal pruning based routing.ipynb | phoophoo187/Privacy_SDN_Edge_IoT |
Define functions | def add_multi_link_attributes(G,attr1,attr2):
"""
This function adds multiple link attributes to graph G
input: G : graph
attr1 : link attribute 1
attr2 : link attribute 2
output : G
"""
i = 0
for (u, v) in G.edges():
G.add_edge(u,v,w=attr1[i],c=attr2[i])
i = i+1
return G
def draw_graph(G,pos):
"""
This function is to draw a graph with the fixed position
input : G : graph
pos: postions of all nodes with the dictionary of coordinates (x,y)
"""
edge_labels = {} ## add edge labels from edge attributes
for u, v, data in G.edges(data=True):
edge_labels[u, v] = data
nx.draw_networkx(G,pos)
nx.draw_networkx_edge_labels(G,pos,edge_labels=edge_labels)
def remove_Edge(G,rm_edge_list):
"""
This function is to remove edges in the rm_edge_list from G
"""
G.remove_edges_from(rm_edge_list)
G.edges()
return G
def compare_path(path1,path2):
    """Return True if both paths contain the same nodes, ignoring order."""
    import collections  # imported here since the notebook does not import it globally
    if collections.Counter(path1) == collections.Counter(path2):
        print ("The two paths are the same")
        flag = True
    else:
        print ("The two paths are not the same")
        flag = False
    return flag
def additive_path_cost(G, path, attr):
"""
This function is to find the path cost based on the additive costs
: Path_Cost = sum_{edges in the path}attr[edge]
Input : G : graph
path : path is a list of nodes in the path
attr : attribute of edges
output : path_cost
"""
return sum([G[path[i]][path[i+1]][attr] for i in range(len(path)-1)])
## Calculate concave path cost from attr
def max_path_cost(G, path, attr):
"""
This function is to find the path cost based on the Concave costs
: Path_Cost = max{edges in the path}attr[edge]
Input : G : graph
path : path is a list of nodes in the path
attr : attribute of edges
output : path_cost
"""
return max([G[path[i]][path[i+1]][attr] for i in range(len(path)-1)])
def rm_edge_constraint(G,Cons):
    """
    This function is to remove from G every edge whose concave cost 'c'
    is greater than or equal to the constraint Cons
    """
    rm_edge_list = []
    for u, v, data in G.edges(data=True):
        e = (u,v)
        cost = G.get_edge_data(*e)
        print(cost)
        if cost['c'] >= Cons:
            rm_edge_list.append(e)
            print(rm_edge_list)
    remove_Edge(G,rm_edge_list)
    return G
def has_path(G, source, target):
"""Return True if G has a path from source to target, False otherwise.
Parameters
----------
G : NetworkX graph
source : node
Starting node for path
target : node
Ending node for path
"""
    try:
        nx.shortest_path(G, source, target)
    except nx.NetworkXNoPath:
        return False
    return True
def Optimum_prun_based_routing(G,S,D,L):
"""
This function is to find the optimal path from S to D with constraint L
Input : G : graph
S : Source
D : Destination
L : constraint
"""
if has_path(G, S, D):
Shortest_path = nx.dijkstra_path(G, S, D, weight='w')
Opt_path = Shortest_path
while len(Shortest_path) != 0:
path_cost = additive_path_cost(G, Shortest_path, 'w')
print(path_cost)
            if path_cost <= L:
                """go to concave cost"""
                PathConcave_cost = max_path_cost(G, Shortest_path, 'c')
                # remove all links whose concave cost is >= PathConcave_cost
                G = rm_edge_constraint(G,PathConcave_cost)
                Opt_path = Shortest_path
                if has_path(G, S, D):
                    Shortest_path = nx.dijkstra_path(G, S, D, weight='w')
                else:
                    Shortest_path = []
            else:
                # The cheapest remaining path already violates L; stop searching.
                break
else:
print('No path from', S, ' to ', D)
Opt_path = []
return Opt_path
| _____no_output_____ | MIT | Python_Graph/A simple graph with optimal pruning based routing.ipynb | phoophoo187/Privacy_SDN_Edge_IoT |
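The two path-cost notions defined above (additive sum of `w`, concave maximum of `c`) can be sketched without networkx. This is a minimal illustration, assuming the graph is stored as a plain dict keyed by edge tuples, a stand-in for networkx's `G[u][v]` lookup used in the functions:

```python
# Minimal sketch of additive vs. concave path costs, assuming a graph stored
# as a plain dict keyed by edge tuples (a stand-in for networkx's G[u][v]).
edges = {
    ('S', 'B'): {'w': 2, 'c': 1},
    ('B', 'D'): {'w': 1, 'c': 4},
}

def edge_attr(u, v, attr):
    # Undirected lookup: try (u, v) first, then (v, u).
    return edges[(u, v)][attr] if (u, v) in edges else edges[(v, u)][attr]

def additive_cost(path, attr):
    # Sum the attribute over consecutive edges of the path.
    return sum(edge_attr(path[i], path[i + 1], attr) for i in range(len(path) - 1))

def concave_cost(path, attr):
    # Take the maximum (bottleneck) attribute over the path's edges.
    return max(edge_attr(path[i], path[i + 1], attr) for i in range(len(path) - 1))

print(additive_cost(['S', 'B', 'D'], 'w'))  # 2 + 1 = 3
print(concave_cost(['S', 'B', 'D'], 'c'))   # max(1, 4) = 4
```

These are the same two quantities that drive the pruning loop: the additive cost is checked against the constraint L, while the concave cost sets the pruning threshold.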
Create a graph | G = nx.Graph()
edge_list = [('S', 'B'), ('S', 'A'), ('S','E'), ('B','A'), ('B','D'), ('A','D'), ('E','D')]
Weight_edge_list = [2, 2, 3, 2, 1, 2, 2]
Concave_edge_list = [1, 3, 3, 1, 4, 3, 1]
pos = { 'S': (0,50), 'B': (50, 100), 'A': (50, 50), 'E': (50, 0), 'D': (100, 50)} # draw by position
G.add_edges_from(edge_list)
G = add_multi_link_attributes(G,Weight_edge_list,Concave_edge_list)
draw_graph(G,pos)
| _____no_output_____ | MIT | Python_Graph/A simple graph with optimal pruning based routing.ipynb | phoophoo187/Privacy_SDN_Edge_IoT |
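As a sanity check on this example graph, the weight-shortest S-to-D path can be confirmed with a small standard-library Dijkstra sketch (no networkx needed); the edge data below mirrors `edge_list` and `Weight_edge_list` above:

```python
import heapq

# The example graph's edges and additive weights 'w', mirroring edge_list
# and Weight_edge_list above (undirected).
edges = [('S', 'B', 2), ('S', 'A', 2), ('S', 'E', 3), ('B', 'A', 2),
         ('B', 'D', 1), ('A', 'D', 2), ('E', 'D', 2)]
adj = {}
for u, v, w in edges:
    adj.setdefault(u, []).append((v, w))
    adj.setdefault(v, []).append((u, w))

def dijkstra(src, dst):
    # Standard Dijkstra: returns (total weight, node list) of a shortest path.
    pq, seen = [(0, src, [src])], set()
    while pq:
        d, u, path = heapq.heappop(pq)
        if u == dst:
            return d, path
        if u in seen:
            continue
        seen.add(u)
        for v, w in adj[u]:
            if v not in seen:
                heapq.heappush(pq, (d + w, v, path + [v]))
    return float('inf'), []

print(dijkstra('S', 'D'))  # -> (3, ['S', 'B', 'D'])
```

S-B-D (weight 3) is the cheapest path by `w`; note, however, that its concave bottleneck is c = 4 on edge B-D, which is what the pruning step below targets.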
Run the optimum-pruning-based-routing algorithm | Optimum_prun_based_routing(G,'S','D',5) | 3
{'w': 2, 'c': 1}
{'w': 2, 'c': 3}
{'w': 3, 'c': 3}
{'w': 2, 'c': 1}
{'w': 1, 'c': 4}
[('B', 'D')]
{'w': 2, 'c': 3}
{'w': 2, 'c': 1}
4
{'w': 2, 'c': 1}
{'w': 2, 'c': 3}
[('S', 'A')]
{'w': 3, 'c': 3}
[('S', 'A'), ('S', 'E')]
{'w': 2, 'c': 1}
{'w': 2, 'c': 3}
[('S', 'A'), ('S', 'E'), ('A', 'D')]
{'w': 2, 'c': 1}
| MIT | Python_Graph/A simple graph with optimal pruning based routing.ipynb | phoophoo187/Privacy_SDN_Edge_IoT |
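The trace above can be reproduced end to end with a pure-Python sketch of the pruning loop — Dijkstra on the additive weight, then repeated removal of edges at or above the current path's concave bottleneck — independent of networkx. The names here (`prune_route`, `get_edge`) are illustrative, not from the notebook:

```python
import heapq

# Example graph: (u, v) -> (additive weight w, concave cost c),
# matching edge_list / Weight_edge_list / Concave_edge_list above.
EDGES = {('S', 'B'): (2, 1), ('S', 'A'): (2, 3), ('S', 'E'): (3, 3),
         ('B', 'A'): (2, 1), ('B', 'D'): (1, 4), ('A', 'D'): (2, 3),
         ('E', 'D'): (2, 1)}

def get_edge(edges, u, v):
    # Undirected edge lookup.
    return edges[(u, v)] if (u, v) in edges else edges[(v, u)]

def dijkstra(edges, src, dst):
    # Shortest path by additive weight w; returns [] when dst is unreachable.
    adj = {}
    for (u, v), (w, _) in edges.items():
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    pq, seen = [(0, src, [src])], set()
    while pq:
        d, u, path = heapq.heappop(pq)
        if u == dst:
            return path
        if u in seen:
            continue
        seen.add(u)
        for v, w in adj.get(u, []):
            if v not in seen:
                heapq.heappush(pq, (d + w, v, path + [v]))
    return []

def prune_route(edges, src, dst, L):
    edges = dict(edges)          # work on a copy; edges get pruned below
    opt = []
    path = dijkstra(edges, src, dst)
    while path:
        if sum(get_edge(edges, u, v)[0] for u, v in zip(path, path[1:])) > L:
            break                # cheapest remaining path violates L: stop
        opt = path
        c_max = max(get_edge(edges, u, v)[1] for u, v in zip(path, path[1:]))
        # Prune every edge whose concave cost reaches the path's bottleneck.
        edges = {e: wc for e, wc in edges.items() if wc[1] < c_max}
        path = dijkstra(edges, src, dst)
    return opt

print(prune_route(EDGES, 'S', 'D', 5))  # -> ['S', 'A', 'D']
```

With L = 5 this reproduces the notebook's run: the weight-shortest path S-B-D (bottleneck c = 4) is found first, edge B-D is pruned, and S-A-D (weight 4, bottleneck c = 3) survives as the optimal path once the next pruning round disconnects S from D.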
Machine Learning Engineer Nanodegree Introduction and Foundations Project: Titanic Survival ExplorationIn 1912, the ship RMS Titanic struck an iceberg on its maiden voyage and sank, resulting in the deaths of most of its passengers and crew. In this introductory project, we will explore a subset of the RMS Titanic passenger manifest to determine which features best predict whether someone survived or did not survive. To complete this project, you will need to implement several conditional predictions and answer the questions below. Your project submission will be evaluated based on the completion of the code and your responses to the questions.> **Tip:** Quoted sections like this will provide helpful instructions on how to navigate and use an iPython notebook. Getting StartedTo begin working with the RMS Titanic passenger data, we'll first need to `import` the functionality we need, and load our data into a `pandas` DataFrame. Run the code cell below to load our data and display the first few entries (passengers) for examination using the `.head()` function.> **Tip:** You can run a code cell by clicking on the cell and using the keyboard shortcut **Shift + Enter** or **Shift + Return**. Alternatively, a code cell can be executed using the **Play** button in the hotbar after selecting it. Markdown cells (text cells like this one) can be edited by double-clicking, and saved using these same shortcuts. [Markdown](http://daringfireball.net/projects/markdown/syntax) allows you to write easy-to-read plain text that can be converted to HTML. | # Import libraries necessary for this project
import numpy as np
import pandas as pd
from IPython.display import display # Allows the use of display() for DataFrames
# Import supplementary visualizations code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the dataset
in_file = 'titanic_data.csv'
full_data = pd.read_csv(in_file)
# Print the first few entries of the RMS Titanic data
display(full_data.head()) | _____no_output_____ | MIT | titanic_survival_exploration.ipynb | numanyilmaz/titanic_survival_exploration |
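Since `titanic_data.csv` itself is not bundled with this excerpt, here is a minimal sketch of the same loading pattern using a tiny in-memory sample; the rows below are illustrative stand-ins, not the real manifest:

```python
import io
import pandas as pd

# A tiny hypothetical stand-in for titanic_data.csv, used only to
# illustrate pd.read_csv() and DataFrame.head().
csv_text = """PassengerId,Survived,Pclass,Name,Sex,Age
1,0,3,"Braund, Mr. Owen Harris",male,22
2,1,1,"Cumings, Mrs. John Bradley",female,38
3,1,3,"Heikkinen, Miss. Laina",female,26
"""
full_data = pd.read_csv(io.StringIO(csv_text))
print(full_data.head())
```

In the actual notebook the same call reads the file from disk, and `display()` renders the resulting DataFrame as an HTML table.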