> A generator expression, on the other hand, is used up after one iteration:
G = (n ** 2 for n in range(12))
list(G)
list(G)
CC0-1.0
12-Generators.ipynb
MoRa-0/wwtop
> This can be very useful because it means iteration can be stopped and started:
G = (n ** 2 for n in range(12))
for n in G:
    print(n, end=' ')
    if n > 30:
        break  # the generator pauses here

print("\ndoing something in between")

for n in G:  # the generator resumes where it left off
    print(n, end=' ')
0 1 4 9 16 25 36 
doing something in between
49 64 81 100 121 
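Since a generator can be paused and resumed like this, batching work over a collection of items falls out naturally. A minimal sketch of the idea (the file names here are hypothetical stand-ins for real paths on disk):

```python
from itertools import islice

def data_files():
    # Hypothetical file names; in practice these might come from glob.glob()
    for i in range(10):
        yield "data_%02d.csv" % i

files = data_files()

# Analyze the first batch of four files
batch1 = list(islice(files, 4))

# ...do something in between, then resume exactly where we left off
batch2 = list(islice(files, 4))

print(batch1)
print(batch2)
```

The generator itself remembers which names have already been handed out, so no separate bookkeeping is needed.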
> One place I've found this useful is when working with collections of data files on disk; it means that you can quite easily analyze them in batches, letting the generator keep track of which ones you have yet to see.

Generator Functions: Using ``yield``

> We saw in the previous section that list comprehensions are best used to create relatively simple lists, while using a normal ``for`` loop can be better in more complicated situations. The same is true of generator expressions: we can make more complicated generators using *generator functions*, which make use of the ``yield`` statement.

> Here we have two ways of constructing the same list:
L1 = [n ** 2 for n in range(12)]

L2 = []
for n in range(12):
    L2.append(n ** 2)

print(L1)
print(L2)
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121] [0, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121]
> Similarly, here we have two ways of constructing equivalent generators:
G1 = (n ** 2 for n in range(12))

def gen():
    for n in range(12):
        yield n ** 2

G2 = gen()
print(*G1)
print(*G2)
0 1 4 9 16 25 36 49 64 81 100 121
0 1 4 9 16 25 36 49 64 81 100 121
> A generator function is a function that, rather than using ``return`` to return a value once, uses ``yield`` to yield a (potentially infinite) sequence of values. Just as in generator expressions, the state of the generator is preserved between partial iterations, but if we want a fresh copy of the generator we can simply call the function again.

Example: Prime Number Generator

> Here I'll show my favorite example of a generator function: a function to generate an unbounded series of prime numbers. A classic algorithm for this is the *Sieve of Eratosthenes*, which works something like this:
# Generate a list of candidate values
L = [n for n in range(2, 40)]
print(L)

# Remove all multiples of the first value
L = [n for n in L if n == L[0] or n % L[0] > 0]
print(L)

# Remove all multiples of the second value
L = [n for n in L if n == L[1] or n % L[1] > 0]
print(L)

# Remove all multiples of the third value
L = [n for n in L if n == L[2] or n % L[2] > 0]
print(L)
[2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
> If we repeat this procedure enough times on a large enough list, we can generate as many primes as we wish.

> Let's encapsulate this logic in a generator function:
def gen_primes(N):
    """Generate primes up to N"""
    primes = set()  # the primes found so far
    for n in range(2, N):
        if all(n % p > 0 for p in primes):  # no known prime divides n -> n is prime
            primes.add(n)
            yield n

print(*gen_primes(100))
2 3 5 7 11 13 17 19 23 29 31 37 41 43 47 53 59 61 67 71 73 79 83 89 97
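Note that `gen_primes` takes an upper bound `N`, while the text points out that ``yield`` can produce a potentially infinite sequence. A sketch of an unbounded variant, using `itertools.count` for the candidates and `itertools.islice` to take only as many primes as the caller wants:

```python
from itertools import count, islice

def gen_primes_unbounded():
    """Yield primes indefinitely; the caller decides when to stop."""
    primes = set()
    for n in count(2):  # 2, 3, 4, ... with no upper bound
        if all(n % p > 0 for p in primes):
            primes.add(n)
            yield n

# Take just the first ten primes from the infinite stream
first_ten = list(islice(gen_primes_unbounded(), 10))
print(first_ten)
```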
Hand-written digits dataset from UCI: http://archive.ics.uci.edu/ml/datasets/Optical+Recognition+of+Handwritten+Digits
# Importing load_digits() from the sklearn.datasets package
from sklearn.datasets import load_digits
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

digits_data = load_digits()
digits_data.keys()

labels = pd.Series(digits_data['target'])
data = pd.DataFrame(digits_data['data'])
data.head(1)

# Display the first image
first_image = data.iloc[0]
np_image = first_image.values
np_image = np_image.reshape(8, 8)
plt.imshow(np_image, cmap='gray_r')

# Display a sample of images from across the dataset
f, axarr = plt.subplots(2, 4)
axarr[0, 0].imshow(data.iloc[0].values.reshape(8, 8), cmap='gray_r')
axarr[0, 1].imshow(data.iloc[99].values.reshape(8, 8), cmap='gray_r')
axarr[0, 2].imshow(data.iloc[199].values.reshape(8, 8), cmap='gray_r')
axarr[0, 3].imshow(data.iloc[299].values.reshape(8, 8), cmap='gray_r')
axarr[1, 0].imshow(data.iloc[999].values.reshape(8, 8), cmap='gray_r')
axarr[1, 1].imshow(data.iloc[1099].values.reshape(8, 8), cmap='gray_r')
axarr[1, 2].imshow(data.iloc[1199].values.reshape(8, 8), cmap='gray_r')
axarr[1, 3].imshow(data.iloc[1299].values.reshape(8, 8), cmap='gray_r')

# k-nearest neighbors with 4-fold cross-validation
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import KFold

def train_knn(nneighbors, train_features, train_labels):
    knn = KNeighborsClassifier(n_neighbors=nneighbors)
    knn.fit(train_features, train_labels)
    return knn

def test(model, test_features, test_labels):
    predictions = model.predict(test_features)
    train_test_df = pd.DataFrame()
    train_test_df['correct_label'] = test_labels
    train_test_df['predicted_label'] = predictions
    overall_accuracy = sum(train_test_df["predicted_label"] == train_test_df["correct_label"]) / len(train_test_df)
    return overall_accuracy

def cross_validate(k):
    fold_accuracies = []
    # shuffle=True is required when random_state is set in recent scikit-learn
    kf = KFold(n_splits=4, shuffle=True, random_state=2)
    for train_index, test_index in kf.split(data):
        train_features, test_features = data.loc[train_index], data.loc[test_index]
        train_labels, test_labels = labels.loc[train_index], labels.loc[test_index]
        model = train_knn(k, train_features, train_labels)
        overall_accuracy = test(model, test_features, test_labels)
        fold_accuracies.append(overall_accuracy)
    return fold_accuracies

knn_one_accuracies = cross_validate(1)
np.mean(knn_one_accuracies)

k_values = list(range(1, 10))
k_overall_accuracies = []
for k in k_values:
    k_accuracies = cross_validate(k)
    k_mean_accuracy = np.mean(k_accuracies)
    k_overall_accuracies.append(k_mean_accuracy)

plt.figure(figsize=(8, 4))
plt.title("Mean Accuracy vs. k")
plt.plot(k_values, k_overall_accuracies)

# Neural network with one hidden layer
from sklearn.neural_network import MLPClassifier

def train_nn(neuron_arch, train_features, train_labels):
    mlp = MLPClassifier(hidden_layer_sizes=neuron_arch)
    mlp.fit(train_features, train_labels)
    return mlp

def cross_validate(neuron_arch):
    fold_accuracies = []
    kf = KFold(n_splits=4, shuffle=True, random_state=2)
    for train_index, test_index in kf.split(data):
        train_features, test_features = data.loc[train_index], data.loc[test_index]
        train_labels, test_labels = labels.loc[train_index], labels.loc[test_index]
        model = train_nn(neuron_arch, train_features, train_labels)
        overall_accuracy = test(model, test_features, test_labels)
        fold_accuracies.append(overall_accuracy)
    return fold_accuracies

nn_one_neurons = [(8,), (16,), (32,), (64,), (128,), (256,)]
nn_one_accuracies = []
for n in nn_one_neurons:
    nn_accuracies = cross_validate(n)
    nn_mean_accuracy = np.mean(nn_accuracies)
    nn_one_accuracies.append(nn_mean_accuracy)

plt.figure(figsize=(8, 4))
plt.title("Mean Accuracy vs. Neurons In Single Hidden Layer")
x = [i[0] for i in nn_one_neurons]
plt.plot(x, nn_one_accuracies)

# Neural network with two hidden layers
nn_two_neurons = [(64, 64), (128, 128), (256, 256)]
nn_two_accuracies = []
for n in nn_two_neurons:
    nn_accuracies = cross_validate(n)
    nn_mean_accuracy = np.mean(nn_accuracies)
    nn_two_accuracies.append(nn_mean_accuracy)

plt.figure(figsize=(8, 4))
plt.title("Mean Accuracy vs. Neurons In Two Hidden Layers")
x = [i[0] for i in nn_two_neurons]
plt.plot(x, nn_two_accuracies)
nn_two_accuracies

# Neural network with three hidden layers, 6-fold cross-validation
def cross_validate_six(neuron_arch):
    fold_accuracies = []
    kf = KFold(n_splits=6, shuffle=True, random_state=2)
    for train_index, test_index in kf.split(data):
        train_features, test_features = data.loc[train_index], data.loc[test_index]
        train_labels, test_labels = labels.loc[train_index], labels.loc[test_index]
        model = train_nn(neuron_arch, train_features, train_labels)
        overall_accuracy = test(model, test_features, test_labels)
        fold_accuracies.append(overall_accuracy)
    return fold_accuracies

nn_three_neurons = [(10, 10, 10), (64, 64, 64), (128, 128, 128)]
nn_three_accuracies = []
for n in nn_three_neurons:
    nn_accuracies = cross_validate_six(n)
    nn_mean_accuracy = np.mean(nn_accuracies)
    nn_three_accuracies.append(nn_mean_accuracy)

plt.figure(figsize=(8, 4))
plt.title("Mean Accuracy vs. Neurons In Three Hidden Layers")
x = [i[0] for i in nn_three_neurons]
plt.plot(x, nn_three_accuracies)
nn_three_accuracies
MIT
Basic_Image_Classifier.ipynb
DivyaWadehra/60daysofudacity
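The `cross_validate` functions above delegate the fold bookkeeping to sklearn's `KFold`. The underlying index arithmetic can be sketched in plain Python — an illustrative re-implementation of an unshuffled split, not sklearn's actual code:

```python
def kfold_indices(n_samples, n_splits):
    """Yield (train, test) index lists, mimicking an unshuffled k-fold split."""
    fold_sizes = [n_samples // n_splits] * n_splits
    for i in range(n_samples % n_splits):
        fold_sizes[i] += 1  # spread the remainder over the first folds
    indices = list(range(n_samples))
    start = 0
    for size in fold_sizes:
        test = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, test
        start += size

# 10 samples into 4 folds -> test folds of sizes 3, 3, 2, 2
splits = list(kfold_indices(10, 4))
print([len(test) for _, test in splits])
```

Every sample lands in exactly one test fold, and each train set is the complement of its test fold.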
Image Classification with PyTorch
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
from torchvision import datasets, transforms
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('./data', train=True, download=True,
                   transform=transforms.Compose([transforms.ToTensor()])),
    batch_size=32, shuffle=False)

test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('./data', train=False,
                   transform=transforms.Compose([transforms.ToTensor()])),
    batch_size=32, shuffle=False)

class BasicNN(nn.Module):
    def __init__(self):
        super(BasicNN, self).__init__()
        self.net = nn.Linear(28 * 28, 10)

    def forward(self, x):
        batch_size = x.size(0)
        x = x.view(batch_size, -1)
        output = self.net(x)
        # dim=1 avoids the implicit-dimension deprecation warning;
        # note F.cross_entropy below expects raw logits, so feeding it
        # softmax output (as this notebook does) trains more slowly
        return F.softmax(output, dim=1)

model = BasicNN()
optimizer = optim.SGD(model.parameters(), lr=0.001)

def test():
    total_loss = 0
    correct = 0
    for image, label in test_loader:  # Variable wrapping from old PyTorch is no longer needed
        output = model(image)
        total_loss += F.cross_entropy(output, label)
        correct += (torch.max(output, 1)[1].view(label.size()) == label).sum()
    total_loss = total_loss.item() / len(test_loader)  # .item() replaces the deprecated .data[0]
    accuracy = correct.item() / len(test_loader.dataset)
    return total_loss, accuracy

def train():
    model.train()
    for image, label in train_loader:
        optimizer.zero_grad()
        output = model(image)
        loss = F.cross_entropy(output, label)
        loss.backward()
        optimizer.step()

best_test_loss = None
for e in range(1, 150):
    train()
    test_loss, test_accuracy = test()
    print("\n[Epoch: %d] Test Loss:%5.5f Test Accuracy:%5.5f" % (e, test_loss, test_accuracy))
    # Save the model if the test_loss is the lowest
    if not best_test_loss or test_loss < best_test_loss:
        best_test_loss = test_loss
    else:
        break

print("\nFinal Results\n-------------\n"
      "Loss:", best_test_loss, "Test Accuracy: ", test_accuracy)
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:9: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument. if __name__ == '__main__':
REGIONE LOMBARDIA

Comparison of the deaths recorded by ISTAT and the COVID-19 deaths recorded by the Italian Civil Protection (Protezione Civile) with the deaths forecast by the SARIMA predictive model.

MONTHLY DEATHS, LOMBARDIA REGION (ISTAT)

The DataFrame contains the monthly death counts for the Lombardia region from 2015 through 30 September 2020.
import matplotlib.pyplot as plt
import pandas as pd

decessi_istat = pd.read_csv('../../csv/regioni/lombardia.csv')
decessi_istat.head()

decessi_istat['DATA'] = pd.to_datetime(decessi_istat['DATA'])
decessi_istat.TOTALE = pd.to_numeric(decessi_istat.TOTALE)
Unlicense
Modulo 4 - Analisi per regioni/regioni/Lombardia/.ipynb_checkpoints/Confronto LOMBARDIA-checkpoint.ipynb
SofiaBlack/Towards-a-software-to-measure-the-impact-of-the-COVID-19-outbreak-on-Italian-deaths
Extracting the data for the COVID-19 period
decessi_istat = decessi_istat[decessi_istat['DATA'] > '2020-02-29']
decessi_istat.head()
Building the time series of ISTAT deaths
decessi_istat = decessi_istat.set_index('DATA')
decessi_istat = decessi_istat.TOTALE
decessi_istat
MONTHLY COVID-19 DEATHS, LOMBARDIA REGION

The DataFrame contains the Civil Protection data on monthly COVID-19 deaths in the Lombardia region from March 2020 through 30 September 2020.
covid = pd.read_csv('../../csv/regioni_covid/lombardia.csv')
covid.head()

covid['data'] = pd.to_datetime(covid['data'])
covid.deceduti = pd.to_numeric(covid.deceduti)
covid = covid.set_index('data')
covid.head()
Building the time series of COVID-19 deaths
covid = covid.deceduti
PREDICTED MONTHLY DEATHS ACCORDING TO THE SARIMA MODEL

The DataFrame contains the monthly deaths for the Lombardia region as forecast by the fitted SARIMA model.
predictions = pd.read_csv('../../csv/pred/predictions_SARIMA_lombardia.csv')
predictions.head()

predictions.rename(columns={'Unnamed: 0': 'Data', 'predicted_mean': 'Totale'}, inplace=True)
predictions.head()

predictions['Data'] = pd.to_datetime(predictions['Data'])
predictions.Totale = pd.to_numeric(predictions.Totale)
Extracting the data for the COVID-19 period
predictions = predictions[predictions['Data'] > '2020-02-29']
predictions.head()

predictions = predictions.set_index('Data')
predictions.head()
Building the time series of deaths predicted by the model
predictions = predictions.Totale
CONFIDENCE INTERVALS

Upper bound
upper = pd.read_csv('../../csv/upper/predictions_SARIMA_lombardia_upper.csv')
upper.head()

upper.rename(columns={'Unnamed: 0': 'Data', 'upper TOTALE': 'Totale'}, inplace=True)
upper['Data'] = pd.to_datetime(upper['Data'])
upper.Totale = pd.to_numeric(upper.Totale)
upper.head()

upper = upper[upper['Data'] > '2020-02-29']
upper = upper.set_index('Data')
upper.head()

upper = upper.Totale
Lower bound
lower = pd.read_csv('../../csv/lower/predictions_SARIMA_lombardia_lower.csv')
lower.head()

lower.rename(columns={'Unnamed: 0': 'Data', 'lower TOTALE': 'Totale'}, inplace=True)
lower['Data'] = pd.to_datetime(lower['Data'])
lower.Totale = pd.to_numeric(lower.Totale)
lower.head()

lower = lower[lower['Data'] > '2020-02-29']
lower = lower.set_index('Data')
lower.head()

lower = lower.Totale
COMPARISON OF THE TIME SERIES

Below is a graphical comparison of the time series of total monthly deaths, COVID-19 deaths, and the deaths predicted by the SARIMA model for the Lombardia region. The months considered are March, April, May, June, July, August and September.
plt.figure(figsize=(15,4))
plt.title('LOMBARDIA - Confronto decessi totali, decessi causa covid e decessi del modello predittivo', size=18)
plt.plot(covid, label='decessi accertati covid')
plt.plot(decessi_istat, label='decessi totali')
plt.plot(predictions, label='predizione modello')
plt.legend(prop={'size': 12})
plt.show()

plt.figure(figsize=(15,4))
plt.title("LOMBARDIA - Confronto decessi totali ISTAT con decessi previsti dal modello", size=18)
plt.plot(predictions, label='predizione modello')
plt.plot(upper, label='limite massimo')
plt.plot(lower, label='limite minimo')
plt.plot(decessi_istat, label='decessi totali')
plt.legend(prop={'size': 12})
plt.show()
Computing COVID-19 deaths according to the predictive model

The difference between the total deaths released by ISTAT and the deaths forecast by the SARIMA model.
n = decessi_istat - predictions
n_upper = decessi_istat - lower
n_lower = decessi_istat - upper

plt.figure(figsize=(15,4))
plt.title("LOMBARDIA - Confronto decessi accertati covid con decessi covid previsti dal modello", size=18)
plt.plot(covid, label='decessi covid accertati - Protezione Civile')
plt.plot(n, label='decessi covid previsti - modello SARIMA')
plt.plot(n_upper, label='limite massimo - modello SARIMA')
plt.plot(n_lower, label='limite minimo - modello SARIMA')
plt.legend(prop={'size': 12})
plt.show()
The intervals correspond to the difference between the total deaths supplied by ISTAT for March, April, May and June 2020 and the confidence-interval bounds (upper and lower) of the SARIMA predictive model for the same months.
d = decessi_istat.sum()
print("Decessi 2020:", d)

d_m = predictions.sum()
print("Decessi attesi dal modello 2020:", d_m)

d_lower = lower.sum()
print("Decessi attesi dal modello 2020 - livello minimo:", d_lower)
Decessi attesi dal modello 2020 - livello minimo: 47027.967064348566
Total number of confirmed COVID-19 deaths for the Lombardia region
m = covid.sum()
print(int(m))
16955
Total number of COVID-19 deaths predicted by the model for the Lombardia region

Mean value
total = n.sum()
print(int(total))
25436
Maximum value
total_upper = n_upper.sum()
print(int(total_upper))
35505
Minimum value
total_lower = n_lower.sum()
print(int(total_lower))
15367
Computing the number of unrecorded COVID-19 deaths according to the SARIMA predictive model for the Lombardia region

Mean value
x = decessi_istat - predictions - covid
x = x.sum()
print(int(x))
8481
Maximum value
x_upper = decessi_istat - lower - covid
x_upper = x_upper.sum()
print(int(x_upper))
18550
Minimum value
x_lower = decessi_istat - upper - covid
x_lower = x_lower.sum()
print(int(x_lower))
-1587
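The interval arithmetic used above (observed totals minus the prediction, and minus the lower/upper bounds for the maximum/minimum estimates) can be condensed into one sketch. The monthly figures below are toy numbers for illustration only, not the ISTAT or SARIMA data:

```python
# Toy monthly totals (hypothetical values for illustration only)
observed = {'2020-03': 25000, '2020-04': 14000}   # ISTAT-style totals
predicted = {'2020-03': 8000, '2020-04': 7500}    # model mean
lower = {'2020-03': 7000, '2020-04': 6800}        # lower confidence bound
upper = {'2020-03': 9000, '2020-04': 8200}        # upper confidence bound

# Excess deaths implied by the model: observed minus predicted.
# Subtracting the lower bound gives the maximum estimate,
# subtracting the upper bound gives the minimum.
excess = sum(observed[m] - predicted[m] for m in observed)
excess_max = sum(observed[m] - lower[m] for m in observed)
excess_min = sum(observed[m] - upper[m] for m in observed)

print(excess, excess_max, excess_min)
```

The minimum always sits below the mean estimate, and the maximum above it, because the confidence band brackets the prediction.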
Extracting Data Using APIs

Import package
import requests
MIT
notebooks/02 Extracting Data Using APIs.ipynb
HybridNeos/pluralsight_titanic
basic usage
# url
url = 'https://api.data.gov/ed/collegescorecard/v1/schools?school.name=boston%20college&api_key=qXweBRNPXvP8wo1Ewouaa2ASWZOJHUQUzz4Sbncs'

# using the get command, which returns a response object
result = requests.get(url)

# exploring the response object: status_code
result.status_code

# exploring the response object: headers
result.headers

# exploring the response object: text
result.text

# as a json object
x = dict(result.json())
x = x['results']
x[0].keys()
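Rather than hand-percent-encoding the query string into the URL, the parameters can be assembled with the standard library's `urllib.parse.urlencode`. A sketch — the endpoint mirrors the College Scorecard call above, and `YOUR_API_KEY` is a placeholder, not a real key:

```python
from urllib.parse import urlencode, quote

base = 'https://api.data.gov/ed/collegescorecard/v1/schools'
params = {'school.name': 'boston college', 'api_key': 'YOUR_API_KEY'}

# quote_via=quote percent-encodes the space as %20, matching the URL above
url = base + '?' + urlencode(params, quote_via=quote)
print(url)
```

The resulting string can then be passed to `requests.get` exactly as before (or the `params` dict can be passed directly via `requests.get(base, params=params)`).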
Self-Driving Car Engineer Nanodegree

Project: **Finding Lane Lines on the Road**

***

In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.

Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.

In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [writeup template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the IPython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/!/rubrics/322/view) for this project.

---

Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.

**Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".**

---

**The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.**

---

Your output should look something like this (above) after detecting line segments using the helper functions below. Your goal is to connect/average/extrapolate line segments to get output like this.

**Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**

Import Packages
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
MIT
P1.ipynb
meetguogengli/Project1-Finding-Lane-Lines-on-the-Road
Read in an Image
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')

#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image)  # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
This image is: <class 'numpy.ndarray'> with dimensions: (540, 960, 3)
Ideas for Lane Detection Pipeline

**Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:**

`cv2.inRange()` for color selection  
`cv2.fillPoly()` for regions selection  
`cv2.line()` to draw lines on an image given endpoints  
`cv2.addWeighted()` to coadd / overlay two images  
`cv2.cvtColor()` to grayscale or change color  
`cv2.imwrite()` to output images to file  
`cv2.bitwise_and()` to apply a mask to an image

**Check out the OpenCV documentation to learn about these and discover even more awesome functionality!**

Helper Functions

Below are some helper functions to help get you started. They should look familiar from the lesson!
import math
from scipy import stats

def grayscale(img):
    """Applies the Grayscale transform
    This will return an image with only one color channel
    but NOTE: to see the returned image as grayscale
    (assuming your grayscaled image is called 'gray')
    you should call plt.imshow(gray, cmap='gray')"""
    return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    # Or use BGR2GRAY if you read an image with cv2.imread()
    # return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

def canny(img, low_threshold, high_threshold):
    """Applies the Canny transform"""
    return cv2.Canny(img, low_threshold, high_threshold)

def gaussian_blur(img, kernel_size):
    """Applies a Gaussian Noise kernel"""
    return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)

def region_of_interest(img, vertices):
    """
    Applies an image mask.

    Only keeps the region of the image defined by the polygon
    formed from `vertices`. The rest of the image is set to black.
    `vertices` should be a numpy array of integer points.
    """
    #defining a blank mask to start with
    mask = np.zeros_like(img)

    #defining a 3 channel or 1 channel color to fill the mask with depending on the input image
    if len(img.shape) > 2:
        channel_count = img.shape[2]  # i.e. 3 or 4 depending on your image
        ignore_mask_color = (255,) * channel_count
    else:
        ignore_mask_color = 255

    #filling pixels inside the polygon defined by "vertices" with the fill color
    cv2.fillPoly(mask, vertices, ignore_mask_color)

    #returning the image only where mask pixels are nonzero
    masked_image = cv2.bitwise_and(img, mask)
    return masked_image

def draw_lines(img, lines, color=[255, 0, 0], thickness=2):
    """
    NOTE: this is the function you might want to use as a starting point once you want to
    average/extrapolate the line segments you detect to map out the full
    extent of the lane (going from the result shown in raw-lines-example.mp4
    to that shown in P1_example.mp4).

    Think about things like separating line segments by their
    slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
    line vs. the right line. Then, you can average the position of each of
    the lines and extrapolate to the top and bottom of the lane.

    This function draws `lines` with `color` and `thickness`.
    Lines are drawn on the image inplace (mutates the image).
    If you want to make the lines semi-transparent, think about combining
    this function with the weighted_img() function below
    """
    # for line in lines:
    #     for x1, y1, x2, y2 in line:
    #         cv2.line(img, (x1, y1), (x2, y2), color, thickness)
    sizeY = img.shape[0]
    sizeX = img.shape[1]
    pointsLeft = []
    pointsRight = []

    for line in lines:
        for x1, y1, x2, y2 in line:
            # Gets the midpoint of a line
            posX = (x1 + x2) * 0.5
            posY = (y1 + y2) * 0.5
            # Classify the segment as left or right depending on which
            # half of the image its midpoint falls in
            if posX < sizeX * 0.5:
                pointsLeft.append((posX, posY))
            else:
                pointsRight.append((posX, posY))

    # Get m and b from linear regression
    left = stats.linregress(pointsLeft)
    right = stats.linregress(pointsRight)
    left_m, left_b = left.slope, left.intercept
    right_m, right_b = right.slope, right.intercept

    # Define the points of the left line: x = (y - b) / m
    left_y1 = int(sizeY)
    left_x1 = int((left_y1 - left_b) / left_m)
    left_y2 = int(sizeY * 0.6)
    left_x2 = int((left_y2 - left_b) / left_m)

    # Define the points of the right line: x = (y - b) / m
    right_y1 = int(sizeY)
    right_x1 = int((right_y1 - right_b) / right_m)
    right_y2 = int(sizeY * 0.6)
    right_x2 = int((right_y2 - right_b) / right_m)

    # Draw the two lane lines
    cv2.line(img, (left_x1, left_y1), (left_x2, left_y2), color, thickness)
    cv2.line(img, (right_x1, right_y1), (right_x2, right_y2), color, thickness)

def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
    """
    `img` should be the output of a Canny transform.

    Returns an image with hough lines drawn.
    """
    lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]),
                            minLineLength=min_line_len, maxLineGap=max_line_gap)
    line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
    draw_lines(line_img, lines)
    return line_img

# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., γ=0.):
    """
    `img` is the output of the hough_lines(), an image with lines drawn on it.
    Should be a blank image (all black) with lines drawn on it.

    `initial_img` should be the image before any processing.

    The result image is computed as follows:

    initial_img * α + img * β + γ

    NOTE: initial_img and img must be the same shape!
    """
    return cv2.addWeighted(initial_img, α, img, β, γ)
_____no_output_____
MIT
P1.ipynb
meetguogengli/Project1-Finding-Lane-Lines-on-the-Road
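A different way to collapse the Hough segments into a single lane line — averaging per-segment slopes and intercepts instead of fitting a regression through midpoints — can be sketched as follows (`average_line` is a hypothetical helper for illustration, not part of the notebook's pipeline):

```python
def average_line(segments, y_bottom, y_top):
    """Average slope/intercept of (x1, y1, x2, y2) segments, then
    extrapolate with x = (y - b) / m, as draw_lines does above."""
    slopes = [(y2 - y1) / (x2 - x1) for x1, y1, x2, y2 in segments]
    intercepts = [y1 - m * x1 for (x1, y1, _, _), m in zip(segments, slopes)]
    m = sum(slopes) / len(slopes)
    b = sum(intercepts) / len(intercepts)
    return (int((y_bottom - b) / m), y_bottom, int((y_top - b) / m), y_top)
```

Either approach gives one (x1, y1, x2, y2) endpoint pair per side that cv2.line can draw.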
Test Images: Build your pipeline to work on the images in the directory "test_images" **You should make sure your pipeline works well on these images before you try the videos.**
import os os.listdir("test_images/")
_____no_output_____
MIT
P1.ipynb
meetguogengli/Project1-Finding-Lane-Lines-on-the-Road
Build a Lane Finding Pipeline: Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report. Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
# TODO: Build your pipeline that will draw lane lines on the test_images # then save them to the test_images_output directory. image = mpimg.imread("test_images/"+os.listdir("test_images/")[4]) weighted_image = process_image(image) plt.imshow(weighted_image)
_____no_output_____
MIT
P1.ipynb
meetguogengli/Project1-Finding-Lane-Lines-on-the-Road
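The TODO above also asks to save a processed copy of every test image into `test_images_output`. A minimal, library-agnostic batching sketch (the `read`/`write` callbacks are placeholders I introduced — with matplotlib they would be `mpimg.imread` and `mpimg.imsave`):

```python
import os

def batch_process(in_dir, out_dir, process, read, write):
    """Apply `process` to every file in in_dir; save results under out_dir."""
    os.makedirs(out_dir, exist_ok=True)
    done = []
    for name in sorted(os.listdir(in_dir)):
        result = process(read(os.path.join(in_dir, name)))
        write(os.path.join(out_dir, name), result)
        done.append(name)
    return done
```

For the notebook this would be called as `batch_process("test_images", "test_images_output", process_image, mpimg.imread, mpimg.imsave)`.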
Test on Videos: You know what's cooler than drawing lanes over images? Drawing lanes over video! We can test our solution on two provided videos: `solidWhiteRight.mp4` and `solidYellowLeft.mp4`. **Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.****If you get an error that looks like this:**```NeedDownloadError: Need ffmpeg exe. You can download it by calling: imageio.plugins.ffmpeg.download()```**Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.**
# Import everything needed to edit/save/watch video clips from moviepy.editor import VideoFileClip from IPython.display import HTML def process_image(image): # NOTE: The output you return should be a color image (3 channel) for processing video below # TODO: put your pipeline here, # you should return the final output (image where lines are drawn on lanes) gray = grayscale(image) kernel_size = 9 blur_gray = gaussian_blur(gray, kernel_size) low_threshold = 100 high_threshold = 150 edges = canny(blur_gray, low_threshold, high_threshold) ysize = image.shape[0] xsize = image.shape[1] vertices = np.array([[(xsize * 0.10 , ysize * 0.90), (xsize * 0.46 , ysize * 0.60), (xsize * 0.54 , ysize * 0.60), (xsize * 0.90 , ysize * 0.90)]], dtype=np.int32) # imshape = image.shape # vertices = np.array([[(0,imshape[0]),(0, 0), (imshape[1], 0), (imshape[1],imshape[0])]], dtype=np.int32) # vertices = np.array([[(0,imshape[0]),(450, 320), (490, 320), (imshape[1],imshape[0])]], dtype=np.int32) masked_edges = region_of_interest(edges, vertices) rho = 2 # distance resolution in pixels of the Hough grid theta = np.pi/180 # angular resolution in radians of the Hough grid threshold = 10 # minimum number of votes (intersections in Hough grid cell) min_line_len = 5 #minimum number of pixels making up a line max_line_gap = 5 # maximum gap in pixels between connectable line segments line_image = np.copy(image)*0 # creating a blank to draw lines on line_img = hough_lines(masked_edges, rho, theta, threshold, min_line_len, max_line_gap) weighted_image = weighted_img(line_img, image) return weighted_image
_____no_output_____
MIT
P1.ipynb
meetguogengli/Project1-Finding-Lane-Lines-on-the-Road
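The region-of-interest mask in `process_image` is a trapezoid defined by fixed fractions of the frame size, so it adapts to any resolution. Factoring that computation out (a hypothetical helper, not in the notebook):

```python
def roi_vertices(xsize, ysize):
    """Trapezoid used by process_image: wide near the bottom of the frame
    (90% of height), narrow near the vanishing point (60% of height)."""
    return [(int(xsize * 0.10), int(ysize * 0.90)),
            (int(xsize * 0.46), int(ysize * 0.60)),
            (int(xsize * 0.54), int(ysize * 0.60)),
            (int(xsize * 0.90), int(ysize * 0.90))]
```

Keeping the vertices as fractions means the same pipeline works on both the 960x540 videos and the larger challenge video without retuning.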
Let's try the one with the solid white lane on the right first ...
white_output = 'test_videos_output/solidWhiteRight.mp4' ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video ## To do so add .subclip(start_second,end_second) to the end of the line below ## Where start_second and end_second are integer values representing the start and end of the subclip ## You may also uncomment the following line for a subclip of the first 5 seconds ##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5) clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4") white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!! %time white_clip.write_videofile(white_output, audio=False)
[MoviePy] >>>> Building video test_videos_output/solidWhiteRight.mp4 [MoviePy] Writing video test_videos_output/solidWhiteRight.mp4
MIT
P1.ipynb
meetguogengli/Project1-Finding-Lane-Lines-on-the-Road
Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(white_output))
_____no_output_____
MIT
P1.ipynb
meetguogengli/Project1-Finding-Lane-Lines-on-the-Road
Improve the draw_lines() function. **At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".****Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.** Now for the one with the solid yellow lane on the left. This one's more tricky!
yellow_output = 'test_videos_output/solidYellowLeft.mp4' ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video ## To do so add .subclip(start_second,end_second) to the end of the line below ## Where start_second and end_second are integer values representing the start and end of the subclip ## You may also uncomment the following line for a subclip of the first 5 seconds ##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5) clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4') yellow_clip = clip2.fl_image(process_image) %time yellow_clip.write_videofile(yellow_output, audio=False) HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(yellow_output))
_____no_output_____
MIT
P1.ipynb
meetguogengli/Project1-Finding-Lane-Lines-on-the-Road
Writeup and Submission: If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this IPython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file. Optional Challenge: Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
challenge_output = 'test_videos_output/challenge.mp4' ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video ## To do so add .subclip(start_second,end_second) to the end of the line below ## Where start_second and end_second are integer values representing the start and end of the subclip ## You may also uncomment the following line for a subclip of the first 5 seconds # clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5) clip3 = VideoFileClip('test_videos/challenge.mp4') challenge_clip = clip3.fl_image(process_image) %time challenge_clip.write_videofile(challenge_output, audio=False) HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(challenge_output))
_____no_output_____
MIT
P1.ipynb
meetguogengli/Project1-Finding-Lane-Lines-on-the-Road
argparse made easy!
# pass your function and args from your sys.argv, and you're off to the races! def myprint(arg1, arg2): print("arg1:", arg1) print("arg2:", arg2) autoargs.autocall(myprint, ["first", "second"]) # if you want your arguments to be types, use any function that expects a string # and returns the type you want in your arg annotation def str_repeat(s: str, n: int): print((s * n).strip()) autoargs.autocall(str_repeat, ["args are easy!\n", "3"]) # if your args value is a string, it gets split using shlex autoargs.autocall(str_repeat, "'still easy!\n' 3") import functools import operator # varargs are supported too! def product(*args: float): return functools.reduce(operator.mul, args, 1.0) print(autoargs.autocall(product, ["5", "10", "0.5"])) def join(delimiter, *args): return delimiter.join(args) print(autoargs.autocall(join, [", ", "pretty easy", "right?"])) def aggregate(*args: float, op: {'sum', 'mul'}): if op == "sum": return sum(args) elif op == "mul": return product(*args) autoargs.autocall(aggregate, ["--help"]) # kwargs are supported using command-line syntax def land_of_defaults(a="default-a", argb="b default"): print(a, argb) autoargs.autocall(land_of_defaults, []) # => "" (no args in call) autoargs.autocall(land_of_defaults, ['-aOverride!']) # => "-aOverride!" autoargs.autocall(land_of_defaults, ['-a', 'Override!']) # => "-a Override!" autoargs.autocall(land_of_defaults, ['--argb', 'Override!']) # => "--argb Override!" # warning! if an argument has a default, it can only be given via this kwarg syntax # if you want to require a kwarg, use a kwonly-arg def required_arg(normal, default="boring", *, required): print(normal, default, required) autoargs.autocall(required_arg, ["normal", "--required", "val"]) autoargs.autocall(required_arg, ["normal"])
normal boring val
BSD-3-Clause
examples/usage.ipynb
metaperture/autoargs
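Under the hood, `autoargs` has to translate a function signature into an argparse parser. A rough hand-written equivalent for `str_repeat` (my guess at the shape of the generated parser, not `autoargs`' actual internals):

```python
import argparse

def str_repeat_parser():
    parser = argparse.ArgumentParser(prog="str_repeat")
    # each annotation becomes a type= converter; parameters with defaults
    # would instead become --options, as shown in land_of_defaults above
    parser.add_argument("s", type=str)
    parser.add_argument("n", type=int)
    return parser

args = str_repeat_parser().parse_args(["args are easy!\n", "3"])
print((args.s * args.n).strip())
```

Writing this by hand for every function is exactly the boilerplate `autocall` removes.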
Invalid Arg Handling: Speaking of errors, invalid arguments are caught by the parser. This means that you get CLI-like error messages, like the user would be expecting if this were a CLI interface.
def oops(arg: int): return "%s is an integer!" % arg autoargs.autocall(oops, []) autoargs.autocall(oops, ["spam"]) autoargs.autocall(oops, ["20", "spam"])
usage: oops [-h] arg oops: error: unrecognized arguments: spam
BSD-3-Clause
examples/usage.ipynb
metaperture/autoargs
parser
# if you want access to the parser, go right ahead! parser = autoargs.autoparser(myprint) parser parsed = parser.parse_args(["first", "second"]) parsed vars(parsed)
_____no_output_____
BSD-3-Clause
examples/usage.ipynb
metaperture/autoargs
Convolutional Layer: In this notebook, we visualize four filtered outputs (a.k.a. feature maps) of a convolutional layer. Import the image
import cv2 import matplotlib.pyplot as plt %matplotlib inline # TODO: Feel free to try out your own images here by changing img_path # to a file path to another image on your computer! img_path = 'images/udacity_sdc.png' #img_path = 'C:/Users/oanag/Pictures/2019/FranceCoteDAzur_2019-04-26/FranceCoteDAzur-134.JPG' # load color image bgr_img = cv2.imread(img_path) # convert to grayscale gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY) # normalize, rescale entries to lie in [0,1] gray_img = gray_img.astype("float32")/255 # plot image plt.imshow(gray_img, cmap='gray') plt.show()
_____no_output_____
MIT
1_5_CNN_Layers/1. Conv Layer Visualization.ipynb
OanaGaskey/ComputerVision-Exercises
Define and visualize the filters
import numpy as np ## TODO: Feel free to modify the numbers here, to try out another filter! filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]]) print('Filter shape: ', filter_vals.shape) #nicely print matrix print(filter_vals) # Defining four different filters, # all of which are linear combinations of the `filter_vals` defined above # define four filters filter_1 = filter_vals filter_2 = -filter_1 filter_3 = filter_1.T filter_4 = -filter_3 filters = np.array([filter_1, filter_2, filter_3, filter_4]) # For an example, print out the values of filter 1 print(filters) ### do not modify the code below this line ### # visualize all four filters fig = plt.figure(figsize=(10, 5)) for i in range(4): ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[]) ax.imshow(filters[i], cmap='gray') ax.set_title('Filter %s' % str(i+1)) width, height = filters[i].shape for x in range(width): for y in range(height): ax.annotate(str(filters[i][x][y]), xy=(y,x), horizontalalignment='center', verticalalignment='center', color='white' if filters[i][x][y]<0 else 'black')
_____no_output_____
MIT
1_5_CNN_Layers/1. Conv Layer Visualization.ipynb
OanaGaskey/ComputerVision-Exercises
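To see exactly what a convolutional layer will do with these filters, the same operation can be written out by hand — a stride-1, no-padding cross-correlation (deep-learning "convolution" slides the kernel without flipping it). This is a didactic sketch, not the PyTorch implementation:

```python
import numpy as np

def correlate2d_valid(img, kernel):
    kh, kw = kernel.shape
    out_h, out_w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # elementwise multiply the patch by the kernel and sum
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out
```

On a dark-to-light vertical edge, filter_1 responds most strongly where the edge sits under the kernel's sign change — which is why the filtered outputs below highlight edges.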
Define a convolutional layer Initialize a single convolutional layer so that it contains all your created filters. Note that you are not training this network; you are initializing the weights in a convolutional layer so that you can visualize what happens after a forward pass through this network!
import torch import torch.nn as nn import torch.nn.functional as F # define a neural network with a single convolutional layer with four filters class Net(nn.Module): def __init__(self, weight): super(Net, self).__init__() # initializes the weights of the convolutional layer to be the weights of the 4 defined filters k_height, k_width = weight.shape[2:] # assumes there are 4 grayscale filters self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False) self.conv.weight = torch.nn.Parameter(weight) def forward(self, x): # calculates the output of a convolutional layer # pre- and post-activation conv_x = self.conv(x) activated_x = F.relu(conv_x) # returns both layers return conv_x, activated_x # instantiate the model and set the weights weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor) model = Net(weight) # print out the layer in the network print(model)
Net( (conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False) )
MIT
1_5_CNN_Layers/1. Conv Layer Visualization.ipynb
OanaGaskey/ComputerVision-Exercises
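Because the layer uses 4x4 kernels with stride 1, no padding, and `bias=False`, both its output size and its weight count can be checked by hand with the standard formulas (this arithmetic is a sanity check, not notebook code):

```python
def conv2d_out_size(in_size, kernel, stride=1, padding=0):
    # standard formula for one spatial dimension of a conv layer
    return (in_size + 2 * padding - kernel) // stride + 1

out_h = conv2d_out_size(28, 4)   # e.g. a 28x28 input shrinks to 25x25
n_weights = 4 * 1 * 4 * 4        # out_channels * in_channels * kh * kw
print(out_h, n_weights)
```

So the four 4x4 grayscale filters account for 64 weights in total, matching the `weight` tensor built from `filters` above.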
Visualize the output of each filter. First, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
# helper function for visualizing the output of a given layer # default number of filters is 4 def viz_layer(layer, n_filters= 4): fig = plt.figure(figsize=(20, 20)) for i in range(n_filters): ax = fig.add_subplot(1, n_filters, i+1, xticks=[], yticks=[]) # grab layer outputs ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray') ax.set_title('Output %s' % str(i+1))
_____no_output_____
MIT
1_5_CNN_Layers/1. Conv Layer Visualization.ipynb
OanaGaskey/ComputerVision-Exercises
Let's look at the output of a convolutional layer, before and after a ReLU activation function is applied.
# plot original image plt.imshow(gray_img, cmap='gray') # visualize all filters fig = plt.figure(figsize=(12, 6)) fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05) for i in range(4): ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[]) ax.imshow(filters[i], cmap='gray') ax.set_title('Filter %s' % str(i+1)) # convert the image into an input Tensor gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1) # get the convolutional layer (pre and post activation) conv_layer, activated_layer = model(gray_img_tensor) # visualize the output of a conv layer viz_layer(conv_layer) # after a ReLu is applied # visualize the output of an activated conv layer viz_layer(activated_layer)
_____no_output_____
MIT
1_5_CNN_Layers/1. Conv Layer Visualization.ipynb
OanaGaskey/ComputerVision-Exercises
Challenge Problem 02 Working with Datetimes 1. Write a Python program to add year(s) to a given date and display the new date.```Sample Data : (addYears is the user defined function name)print(addYears(datetime.date(2015,1,1), -1))print(addYears(datetime.date(2015,1,1), 0))print(addYears(datetime.date(2015,1,1), 2))print(addYears(datetime.date(2000,2,29),1))Expected Output :2014-01-012015-01-012017-01-012001-03-01```
import datetime # insert code here
_____no_output_____
Apache-2.0
challenge problems/Challenge Problem 02 Student Version.ipynb
shaheen19/Adv_Py_Scripting_for_GIS_Course
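One possible sketch — not an official solution — is to let `date.replace` do the year shift and handle the single case that can fail (Feb 29 mapped into a non-leap year) by rolling forward to Mar 1, matching the expected output above:

```python
import datetime

def addYears(d, years):
    try:
        return d.replace(year=d.year + years)
    except ValueError:
        # only Feb 29 can raise here; the expected output rolls it to Mar 1
        return d.replace(year=d.year + years, month=3, day=1)

print(addYears(datetime.date(2015, 1, 1), -1))
print(addYears(datetime.date(2000, 2, 29), 1))
```

Another convention is to clamp to Feb 28 instead; the expected output above chooses Mar 1.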
2. Write a Python program to get the date of the last Tuesday.
import datetime # insert code here
_____no_output_____
Apache-2.0
challenge problems/Challenge Problem 02 Student Version.ipynb
shaheen19/Adv_Py_Scripting_for_GIS_Course
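A sketch for the last-Tuesday problem, treating "last Tuesday" as strictly before today (that interpretation, and the name `last_tuesday`, are my assumptions): step back by the weekday difference modulo 7.

```python
import datetime

def last_tuesday(today=None):
    today = today or datetime.date.today()
    # Monday=0, Tuesday=1; an offset of 0 would mean today itself is
    # Tuesday, so force a full week back instead
    offset = (today.weekday() - 1) % 7 or 7
    return today - datetime.timedelta(days=offset)

print(last_tuesday())
```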
A Chaos Game with TrianglesJohn D. Cook [proposed](https://www.johndcook.com/blog/2017/07/08/the-chaos-game-and-the-sierpinski-triangle/) an interesting "game" from the book *[Chaos and Fractals](https://smile.amazon.com/Chaos-Fractals-New-Frontiers-Science/dp/0387202293)*: start at a vertex of an equilateral triangle. Then move to a new point halfway between the current point and one of the three vertexes of the triangle, chosen at random. Repeat to create *N* points, and plot them. What do you get? I'll refactor Cook's code a bit and then we'll see:
import matplotlib.pyplot as plt import random def random_walk(vertexes, N): "Walk halfway from current point towards a random vertex; repeat for N points." points = [random.choice(vertexes)] for _ in range(N-1): points.append(midpoint(points[-1], random.choice(vertexes))) return points def show_walk(vertexes, N=5000): "Walk halfway towards a random vertex for N points; show results." Xs, Ys = transpose(random_walk(vertexes, N)) Xv, Yv = transpose(vertexes) plt.plot(Xs, Ys, 'r.') plt.plot(Xv, Yv, 'bs') plt.gca().set_aspect('equal') plt.gcf().set_size_inches(9, 9) plt.axis('off') plt.show() def midpoint(p, q): return ((p[0] + q[0])/2, (p[1] + q[1])/2) def transpose(matrix): return zip(*matrix) triangle = ((0, 0), (0.5, (3**0.5)/2), (1, 0)) show_walk(triangle, 20)
_____no_output_____
MIT
ipynb/Sierpinski.ipynb
kajalkatiyar/pytudes
OK, the first 20 points don't tell me much. What if I try 20,000 points?
show_walk(triangle, 20000)
_____no_output_____
MIT
ipynb/Sierpinski.ipynb
kajalkatiyar/pytudes
Wow! The [Sierpinski Triangle](https://en.wikipedia.org/wiki/Sierpinski_triangle)! What happens if we start with a different set of vertexes, like a square?
square = ((0, 0), (0, 1), (1, 0), (1, 1)) show_walk(square)
_____no_output_____
MIT
ipynb/Sierpinski.ipynb
kajalkatiyar/pytudes
There doesn't seem to be any structure there. Let's try again to make sure:
show_walk(square, 20000)
_____no_output_____
MIT
ipynb/Sierpinski.ipynb
kajalkatiyar/pytudes
I'm still not seeing anything but random points. How about a right triangle?
right_triangle = ((0, 0), (0, 1), (1, 0)) show_walk(right_triangle, 20000)
_____no_output_____
MIT
ipynb/Sierpinski.ipynb
kajalkatiyar/pytudes
We get a squished Sierpinski triangle. How about a pentagon? (I'm lazy so I had Wolfram Alpha [compute the vertexes](https://www.wolframalpha.com/input/?i=vertexes+of+regular+pentagon).)
pentagon = ((0.5, -0.688), (0.809, 0.262), (0., 0.850), (-0.809, 0.262), (-0.5, -0.688)) show_walk(pentagon)
_____no_output_____
MIT
ipynb/Sierpinski.ipynb
kajalkatiyar/pytudes
To clarify, let's try again with different numbers of points:
show_walk(pentagon, 10000) show_walk(pentagon, 20000)
_____no_output_____
MIT
ipynb/Sierpinski.ipynb
kajalkatiyar/pytudes
I definitely see a central hole, and five secondary holes surrounding that, and then, maybe 15 holes surrounding those? Or maybe not 15; hard to tell. Is a "Sierpinski Pentagon" a thing? I hadn't heard of it but a [quick search](https://www.google.com/search?q=sierpinski+pentagon) reveals that yes indeed, it is [a thing](http://ecademy.agnesscott.edu/~lriddle/ifs/pentagon/sierngon.htm), and it does have 15 holes surrounding the 5 holes. Let's try the hexagon:
hexagon = ((0.5, -0.866), (1, 0), (0.5, 0.866), (-0.5, 0.866), (-1, 0), (-0.5, -0.866)) show_walk(hexagon) show_walk(hexagon, 20000)
_____no_output_____
MIT
ipynb/Sierpinski.ipynb
kajalkatiyar/pytudes
Part 2: Intro to Private Training with Remote ExecutionIn the last section, we learned about PointerTensors, which create the underlying infrastructure we need for privacy preserving Deep Learning. In this section, we're going to see how to use these basic tools to train our first deep learning model using remote execution.Authors:- Yann Dupis - Twitter: [@YannDupis](https://twitter.com/YannDupis)- Andrew Trask - Twitter: [@iamtrask](https://twitter.com/iamtrask) Why use remote execution?Let's say you are an AI startup who wants to build a deep learning model to detect [diabetic retinopathy (DR)](https://ai.googleblog.com/2016/11/deep-learning-for-detection-of-diabetic.html), which is the fastest growing cause of blindness. Before training your model, the first step would be to acquire a dataset of retinopathy images with signs of DR. One approach could be to work with a hospital and ask them to send you a copy of this dataset. However, because of the sensitivity of the patients' data, the hospital might be exposed to liability risks. That's where remote execution comes into the picture. Instead of bringing training data to the model (a central server), you bring the model to the training data (wherever it may live). In this case, it would be the hospital. The idea is that this allows whoever is creating the data to own the only permanent copy, and thus maintain control over whoever has access to it. Pretty cool, eh? Section 2.1 - Private Training on MNISTFor this tutorial, we will train a model on the [MNIST dataset](http://yann.lecun.com/exdb/mnist/) to classify digits based on images.We can assume that we have a remote worker named Bob who owns the data.
import tensorflow as tf import syft as sy hook = sy.TensorFlowHook(tf) bob = sy.VirtualWorker(hook, id="bob")
_____no_output_____
Apache-2.0
examples/Part 02 - Intro to Private Training with Remote Execution.ipynb
shubham3121/PySyft-TensorFlow
Let's download the MNIST data from `tf.keras.datasets`. Note that we are converting the data from numpy to `tf.Tensor` in order to use the PySyft functionality.
mnist = tf.keras.datasets.mnist (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 x_train, y_train = tf.convert_to_tensor(x_train), tf.convert_to_tensor(y_train) x_test, y_test = tf.convert_to_tensor(x_test), tf.convert_to_tensor(y_test)
_____no_output_____
Apache-2.0
examples/Part 02 - Intro to Private Training with Remote Execution.ipynb
shubham3121/PySyft-TensorFlow
As described in Part 1, we can send this data to Bob with the `send` method on the `tf.Tensor`.
x_train_ptr = x_train.send(bob) y_train_ptr = y_train.send(bob)
_____no_output_____
Apache-2.0
examples/Part 02 - Intro to Private Training with Remote Execution.ipynb
shubham3121/PySyft-TensorFlow
Excellent! We have everything to start experimenting. To train our model on Bob's machine, we just have to perform the following steps:- Define a model, including optimizer and loss- Send the model to Bob- Start the training process- Get the trained model backLet's do it!
# Define the model model = tf.keras.models.Sequential([ tf.keras.layers.Flatten(input_shape=(28, 28)), tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(10, activation='softmax') ]) # Compile with optimizer, loss and metrics model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
_____no_output_____
Apache-2.0
examples/Part 02 - Intro to Private Training with Remote Execution.ipynb
shubham3121/PySyft-TensorFlow
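A quick sanity check on the architecture above: Flatten and Dropout add no parameters, so the model's entire size comes from the two Dense layers (inputs x units, plus one bias per unit):

```python
flatten_out = 28 * 28                # 784 features per MNIST image
dense1 = flatten_out * 128 + 128     # weights + biases of the hidden layer
dense2 = 128 * 10 + 10               # weights + biases of the output layer
total = dense1 + dense2
print(total)  # 101770
```

These are the parameters that get serialized and shipped to Bob when the model is sent.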
Once you have defined your model, you can simply send it to Bob calling the `send` method. It's the exact same process as sending a tensor.
model_ptr = model.send(bob) model_ptr
_____no_output_____
Apache-2.0
examples/Part 02 - Intro to Private Training with Remote Execution.ipynb
shubham3121/PySyft-TensorFlow
Now, we have a pointer pointing to the model on Bob's machine. We can validate that's the case by inspecting the attribute `_objects` on the virtual worker.
bob._objects[model_ptr.id_at_location]
_____no_output_____
Apache-2.0
examples/Part 02 - Intro to Private Training with Remote Execution.ipynb
shubham3121/PySyft-TensorFlow
Everything is ready to start training our model on this remote dataset. You can call `fit` and pass `x_train_ptr` `y_train_ptr` which are pointing to Bob's data. Note that's the exact same interface as normal `tf.keras`.
model_ptr.fit(x_train_ptr, y_train_ptr, epochs=2, validation_split=0.2)
_____no_output_____
Apache-2.0
examples/Part 02 - Intro to Private Training with Remote Execution.ipynb
shubham3121/PySyft-TensorFlow
Fantastic! You have trained your model, achieving an accuracy greater than 95%. You can get your trained model back by just calling `get` on it.
model_gotten = model_ptr.get() model_gotten
_____no_output_____
Apache-2.0
examples/Part 02 - Intro to Private Training with Remote Execution.ipynb
shubham3121/PySyft-TensorFlow
It's good practice to see if your model can generalize by assessing its accuracy on a holdout dataset. You can simply call `evaluate`.
model_gotten.evaluate(x_test, y_test, verbose=2)
_____no_output_____
Apache-2.0
examples/Part 02 - Intro to Private Training with Remote Execution.ipynb
shubham3121/PySyft-TensorFlow
Boom! The model remotely trained on Bob's data is more than 95% accurate on this holdout dataset. If your model doesn't fit into the Sequential paradigm, you can use Keras's functional API, or even subclass [tf.keras.Model](https://www.tensorflow.org/guide/keras/custom_layers_and_modelsbuilding_models) to create custom models.
class CustomModel(tf.keras.Model): def __init__(self, num_classes=10): super(CustomModel, self).__init__(name='custom_model') self.num_classes = num_classes self.flatten = tf.keras.layers.Flatten(input_shape=(28, 28)) self.dense_1 = tf.keras.layers.Dense(128, activation='relu') self.dropout = tf.keras.layers.Dropout(0.2) self.dense_2 = tf.keras.layers.Dense(num_classes, activation='softmax') def call(self, inputs, training=False): x = self.flatten(inputs) x = self.dense_1(x) x = self.dropout(x, training=training) return self.dense_2(x) model = CustomModel(10) # need to call the model on dummy data before sending it # in order to set the input shape (required when saving to SavedModel) model.predict(tf.ones([1, 28, 28])) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) model_ptr = model.send(bob) model_ptr.fit(x_train_ptr, y_train_ptr, epochs=2, validation_split=0.2)
_____no_output_____
Apache-2.0
examples/Part 02 - Intro to Private Training with Remote Execution.ipynb
shubham3121/PySyft-TensorFlow
Training on Cloud ML Engine This notebook illustrates distributed training and hyperparameter tuning on Cloud ML Engine.
# change these to try this notebook out BUCKET = 'cloud-training-demos-ml' PROJECT = 'cloud-training-demos' REGION = 'us-central1' import os os.environ['BUCKET'] = BUCKET os.environ['PROJECT'] = PROJECT os.environ['REGION'] = REGION %%bash if ! gsutil ls | grep -q gs://${BUCKET}/; then gsutil mb -l ${REGION} gs://${BUCKET} # copy canonical set of preprocessed files if you didn't do previous notebook gsutil -m cp -R gs://cloud-training-demos/babyweight gs://${BUCKET} fi %bash gsutil ls gs://${BUCKET}/babyweight/preproc/*-00000*
gs://cloud-training-demos-ml/babyweight/preproc/eval.csv-00000-of-00012 gs://cloud-training-demos-ml/babyweight/preproc/train.csv-00000-of-00043
Apache-2.0
courses/machine_learning/deepdive/07_structured/5_train.ipynb
varunsimhab/training-data-analyst
Now that we have the TensorFlow code working on a subset of the data, we can package the TensorFlow code up as a Python module and train it on Cloud ML Engine. Train on Cloud ML Engine Training on Cloud ML Engine requires: (1) making the code a Python package, and (2) using gcloud to submit the training code to Cloud ML Engine. The code in model.py is the same as in the TensorFlow notebook. I just moved it to a file so that I could package it up as a module. (Explore the directory structure.)
%bash grep "^def" babyweight/trainer/model.py
def read_dataset(prefix, pattern, batch_size=512): def get_wide_deep(): def serving_input_fn(): def experiment_fn(output_dir): def train_and_evaluate(output_dir):
Apache-2.0
courses/machine_learning/deepdive/07_structured/5_train.ipynb
varunsimhab/training-data-analyst
After moving the code to a package, make sure it works standalone. (Note the --pattern and --train_examples lines so that I am not trying to boil the ocean on my laptop). Even then, this takes about a minute in which you won't see any output ...
%bash echo "bucket=${BUCKET}" rm -rf babyweight_trained export PYTHONPATH=${PYTHONPATH}:${PWD}/babyweight python -m trainer.task \ --bucket=${BUCKET} \ --output_dir=babyweight_trained \ --job-dir=./tmp \ --pattern="00000-of-" --train_examples=500
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive/07_structured/5_train.ipynb
varunsimhab/training-data-analyst
Once the code works in standalone mode, you can run it on Cloud ML Engine. Because this is on the entire dataset, it will take a while. The training run took about an hour for me. You can monitor the job from the GCP console in the Cloud Machine Learning Engine section.
%bash OUTDIR=gs://${BUCKET}/babyweight/trained_model JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S) echo $OUTDIR $REGION $JOBNAME gsutil -m rm -rf $OUTDIR gcloud ml-engine jobs submit training $JOBNAME \ --region=$REGION \ --module-name=trainer.task \ --package-path=$(pwd)/babyweight/trainer \ --job-dir=$OUTDIR \ --staging-bucket=gs://$BUCKET \ --scale-tier=STANDARD_1 \ --runtime-version=1.4 \ -- \ --bucket=${BUCKET} \ --output_dir=${OUTDIR} \ --train_examples=200000
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive/07_structured/5_train.ipynb
varunsimhab/training-data-analyst
When I ran it, training finished, and the evaluation happened three times (filter in Stackdriver on the word "dict"):Saving dict for global step 390632: average_loss = 1.06578, global_step = 390632, loss = 545.55The final RMSE was 1.066 pounds.
from google.datalab.ml import TensorBoard TensorBoard().start('gs://{}/babyweight/trained_model'.format(BUCKET)) for pid in TensorBoard.list()['pid']: TensorBoard().stop(pid) print 'Stopped TensorBoard with pid {}'.format(pid)
Stopped TensorBoard with pid 10437
Apache-2.0
courses/machine_learning/deepdive/07_structured/5_train.ipynb
varunsimhab/training-data-analyst
Hyperparameter tuning All of these are command-line parameters to my program. To do hyperparameter tuning, create hyperparam.yaml and pass it as --config. This step will take 1 hour -- you can increase maxParallelTrials or reduce maxTrials to get it done faster. Since maxParallelTrials is the number of initial seeds to start searching from, you don't want it to be too large; otherwise, all you have is a random search.
%writefile hyperparam.yaml trainingInput: scaleTier: STANDARD_1 hyperparameters: hyperparameterMetricTag: average_loss goal: MINIMIZE maxTrials: 30 maxParallelTrials: 3 params: - parameterName: batch_size type: INTEGER minValue: 8 maxValue: 512 scaleType: UNIT_LOG_SCALE - parameterName: nembeds type: INTEGER minValue: 3 maxValue: 30 scaleType: UNIT_LINEAR_SCALE - parameterName: nnsize type: INTEGER minValue: 64 maxValue: 512 scaleType: UNIT_LOG_SCALE
Overwriting hyperparam.yaml
Apache-2.0
courses/machine_learning/deepdive/07_structured/5_train.ipynb
varunsimhab/training-data-analyst
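The two `scaleType`s above determine how the tuner spreads trials across a range: `UNIT_LINEAR_SCALE` interpolates directly, while `UNIT_LOG_SCALE` interpolates in log space so small batch sizes get as much attention as large ones. A sketch of the mapping from a unit value u in [0, 1] to the parameter range (my reading of the behavior, not the service's documented internals):

```python
def unit_linear_scale(u, lo, hi):
    return lo + u * (hi - lo)

def unit_log_scale(u, lo, hi):
    # equal steps in u multiply the value by a constant factor
    return lo * (hi / lo) ** u

# midpoint of batch_size's 8..512 range under each scaling
print(unit_linear_scale(0.5, 8, 512), unit_log_scale(0.5, 8, 512))
```

Under the log scale the midpoint of 8..512 is 64, not 260 — which is why it suits a parameter like batch_size that varies over orders of magnitude.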
In reality, you would hyper-parameter tune over your entire dataset, and not on a smaller subset (see --pattern). But because this is a demo, I wanted it to finish quickly.
%bash OUTDIR=gs://${BUCKET}/babyweight/hyperparam JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S) echo $OUTDIR $REGION $JOBNAME gsutil -m rm -rf $OUTDIR gcloud ml-engine jobs submit training $JOBNAME \ --region=$REGION \ --module-name=trainer.task \ --package-path=$(pwd)/babyweight/trainer \ --job-dir=$OUTDIR \ --staging-bucket=gs://$BUCKET \ --scale-tier=STANDARD_1 \ --config=hyperparam.yaml \ --runtime-version=1.4 \ -- \ --bucket=${BUCKET} \ --output_dir=${OUTDIR} \ --pattern="00000-of-" --train_examples=5000 %bash gcloud ml-engine jobs describe babyweight_180123_202458
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive/07_structured/5_train.ipynb
varunsimhab/training-data-analyst
Repeat training This time with tuned parameters (note last line)
%bash OUTDIR=gs://${BUCKET}/babyweight/trained_model JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S) echo $OUTDIR $REGION $JOBNAME gsutil -m rm -rf $OUTDIR gcloud ml-engine jobs submit training $JOBNAME \ --region=$REGION \ --module-name=trainer.task \ --package-path=$(pwd)/babyweight/trainer \ --job-dir=$OUTDIR \ --staging-bucket=gs://$BUCKET \ --scale-tier=STANDARD_1 \ -- \ --bucket=${BUCKET} \ --output_dir=${OUTDIR} \ --train_examples=200000 --batch_size=35 --nembeds=16 --nnsize=281
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive/07_structured/5_train.ipynb
varunsimhab/training-data-analyst
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
_____no_output_____
Apache-2.0
C4/W4/ungraded_labs/C4_W4_Lab_1_LSTM.ipynb
Mengxue12/tensorflow-1-public
**Note:** This notebook can run using TensorFlow 2.5.0
#!pip install tensorflow==2.5.0 import tensorflow as tf import numpy as np import matplotlib.pyplot as plt print(tf.__version__) def plot_series(time, series, format="-", start=0, end=None): plt.plot(time[start:end], series[start:end], format) plt.xlabel("Time") plt.ylabel("Value") plt.grid(True) def trend(time, slope=0): return slope * time def seasonal_pattern(season_time): """Just an arbitrary pattern, you can change it if you wish""" return np.where(season_time < 0.4, np.cos(season_time * 2 * np.pi), 1 / np.exp(3 * season_time)) def seasonality(time, period, amplitude=1, phase=0): """Repeats the same pattern at each period""" season_time = ((time + phase) % period) / period return amplitude * seasonal_pattern(season_time) def noise(time, noise_level=1, seed=None): rnd = np.random.RandomState(seed) return rnd.randn(len(time)) * noise_level time = np.arange(4 * 365 + 1, dtype="float32") baseline = 10 series = trend(time, 0.1) baseline = 10 amplitude = 40 slope = 0.05 noise_level = 5 # Create the series series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude) # Update with noise series += noise(time, noise_level, seed=42) split_time = 1000 time_train = time[:split_time] x_train = series[:split_time] time_valid = time[split_time:] x_valid = series[split_time:] window_size = 20 batch_size = 32 shuffle_buffer_size = 1000 def windowed_dataset(series, window_size, batch_size, shuffle_buffer): series = tf.expand_dims(series, axis=-1) ds = tf.data.Dataset.from_tensor_slices(series) ds = ds.window(window_size + 1, shift=1, drop_remainder=True) ds = ds.flat_map(lambda w: w.batch(window_size + 1)) ds = ds.shuffle(shuffle_buffer) ds = ds.map(lambda w: (w[:-1], w[1:])) return ds.batch(batch_size).prefetch(1) def model_forecast(model, series, window_size): ds = tf.data.Dataset.from_tensor_slices(series) ds = ds.window(window_size, shift=1, drop_remainder=True) ds = ds.flat_map(lambda w: w.batch(window_size)) ds = ds.batch(32).prefetch(1) 
forecast = model.predict(ds) return forecast tf.keras.backend.clear_session() tf.random.set_seed(51) np.random.seed(51) window_size = 30 train_set = windowed_dataset(x_train, window_size, batch_size=128, shuffle_buffer=shuffle_buffer_size) model = tf.keras.models.Sequential([ tf.keras.layers.Conv1D(filters=32, kernel_size=5, strides=1, padding="causal", activation="relu", input_shape=[None, 1]), tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True)), tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True)), tf.keras.layers.Dense(1), tf.keras.layers.Lambda(lambda x: x * 200) ]) lr_schedule = tf.keras.callbacks.LearningRateScheduler( lambda epoch: 1e-8 * 10**(epoch / 20)) optimizer = tf.keras.optimizers.SGD(learning_rate=1e-8, momentum=0.9) model.compile(loss=tf.keras.losses.Huber(), optimizer=optimizer, metrics=["mae"]) history = model.fit(train_set, epochs=100, callbacks=[lr_schedule]) plt.semilogx(history.history["lr"], history.history["loss"]) plt.axis([1e-8, 1e-4, 0, 30]) tf.keras.backend.clear_session() tf.random.set_seed(51) np.random.seed(51) #batch_size = 16 dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size) model = tf.keras.models.Sequential([ tf.keras.layers.Conv1D(filters=32, kernel_size=3, strides=1, padding="causal", activation="relu", input_shape=[None, 1]), tf.keras.layers.LSTM(32, return_sequences=True), tf.keras.layers.LSTM(32, return_sequences=True), tf.keras.layers.Dense(1), tf.keras.layers.Lambda(lambda x: x * 200) ]) optimizer = tf.keras.optimizers.SGD(learning_rate=1e-5, momentum=0.9) model.compile(loss=tf.keras.losses.Huber(), optimizer=optimizer, metrics=["mae"]) history = model.fit(dataset,epochs=500) rnn_forecast = model_forecast(model, series[..., np.newaxis], window_size) rnn_forecast = rnn_forecast[split_time - window_size:-1, -1, 0] plt.figure(figsize=(10, 6)) plot_series(time_valid, x_valid) plot_series(time_valid, rnn_forecast) 
tf.keras.metrics.mean_absolute_error(x_valid, rnn_forecast).numpy() import matplotlib.image as mpimg import matplotlib.pyplot as plt #----------------------------------------------------------- # Retrieve a list of list results on training and test data # sets for each training epoch #----------------------------------------------------------- mae=history.history['mae'] loss=history.history['loss'] epochs=range(len(loss)) # Get number of epochs #------------------------------------------------ # Plot MAE and Loss #------------------------------------------------ plt.plot(epochs, mae, 'r') plt.plot(epochs, loss, 'b') plt.title('MAE and Loss') plt.xlabel("Epochs") plt.ylabel("Accuracy") plt.legend(["MAE", "Loss"]) plt.figure() epochs_zoom = epochs[200:] mae_zoom = mae[200:] loss_zoom = loss[200:] #------------------------------------------------ # Plot Zoomed MAE and Loss #------------------------------------------------ plt.plot(epochs_zoom, mae_zoom, 'r') plt.plot(epochs_zoom, loss_zoom, 'b') plt.title('MAE and Loss') plt.xlabel("Epochs") plt.ylabel("Accuracy") plt.legend(["MAE", "Loss"]) plt.figure()
_____no_output_____
Apache-2.0
C4/W4/ungraded_labs/C4_W4_Lab_1_LSTM.ipynb
Mengxue12/tensorflow-1-public
MIDAS ExamplesIf you're reading this you probably already know that MIDAS stands for Mixed Data Sampling, and it is a technique for creating time-series forecast models that allows you to mix series of different frequencies (i.e., you can use monthly data as predictors for a quarterly series, or daily data as predictors for a monthly series, etc.). The general approach has been described in a series of papers by Ghysels, Santa-Clara, Valkanov and others. This notebook attempts to recreate some of the examples from the paper [_Forecasting with Mixed Frequencies_](https://research.stlouisfed.org/publications/review/2010/11/01/forecasting-with-mixed-frequencies/) by Michelle T. Armesto, Kristie M. Engemann, and Michael T. Owyang.
%matplotlib inline import datetime import numpy as np import pandas as pd from midas.mix import mix_freq from midas.adl import estimate, forecast, midas_adl, rmse
_____no_output_____
MIT
examples/MIDASExamples.ipynb
QAQ-Ahuahuahuahua/midaspy-1
MIDAS ADLThis package currently implements the MIDAS ADL (autoregressive distributed lag) method. We'll start with an example using quarterly GDP and monthly payroll data. We'll then show the basic steps in setting up and fitting this type of model, although in practice you'll probably used the top-level __midas_adl__ function to do forecasts.TODO: MIDAS equation and discussion Example 1: GDP vs Non-Farm Payroll
gdp = pd.read_csv('../tests/data/gdp.csv', parse_dates=['DATE'], index_col='DATE') pay = pd.read_csv('../tests/data/pay.csv', parse_dates=['DATE'], index_col='DATE') gdp.tail() pay.tail()
_____no_output_____
MIT
examples/MIDASExamples.ipynb
QAQ-Ahuahuahuahua/midaspy-1
Figure 1This is a variation of Figure 1 from the paper comparing year-over-year growth of GDP and employment.
gdp_yoy = ((1. + (np.log(gdp.GDP) - np.log(gdp.GDP.shift(3)))) ** 4) - 1. emp_yoy = ((1. + (np.log(pay.PAY) - np.log(pay.PAY.shift(1)))) ** 12) - 1. df = pd.concat([gdp_yoy, emp_yoy], axis=1) df.columns = ['gdp_yoy', 'emp_yoy'] df[['gdp_yoy','emp_yoy']].loc['1980-1-1':].plot(figsize=(15,4), style=['o','-'])
_____no_output_____
MIT
examples/MIDASExamples.ipynb
QAQ-Ahuahuahuahua/midaspy-1
Mixing FrequenciesThe first step is to do the actual frequency mixing. In this case we're mixing monthly data (employment) with quarterly data (GDP). This may sometimes be useful to do directly, but again you'll probably use __midas_adl__ to do forecasting.
gdp['gdp_growth'] = (np.log(gdp.GDP) - np.log(gdp.GDP.shift(1))) * 100. pay['emp_growth'] = (np.log(pay.PAY) - np.log(pay.PAY.shift(1))) * 100. y, yl, x, yf, ylf, xf = mix_freq(gdp.gdp_growth, pay.emp_growth, "3m", 1, 3, start_date=datetime.datetime(1985,1,1), end_date=datetime.datetime(2009,1,1)) x.head()
_____no_output_____
MIT
examples/MIDASExamples.ipynb
QAQ-Ahuahuahuahua/midaspy-1
The arguments here are as follows:- First, the dependent (low-frequency) and independent (high-frequency) data are given as Pandas series, and they are assumed to be indexed by date.- xlag: the number of lags for the high-frequency variable- ylag: the number of lags for the low-frequency variable (the autoregressive part)- horizon: how much the high-frequency data is lagged before frequency mixing- start_date, end_date: the start and end dates over which the model is fitted. If these are outside the range of the low-frequency data, they will be adjusted.The _horizon_ argument is a little tricky (the argument name was retained from the MatLab version). It is used both to align the data and to do _nowcasting_ (more on that later). For example, if it's September 2017 then the latest GDP data from FRED will be for Q2, dated 2017-04-01, and the latest monthly data from non-farm payroll will be for August, dated 2017-08-01. If we aligned just on dates, the payroll data for April (04-01), March (03-01), and February (02-01) would be aligned with Q2 (since xlag = "3m"), but what we want is June, May, and April, so here the horizon argument is 3, indicating that the high-frequency data should be lagged three months before being mixed with the quarterly data.
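The alignment in this example can be made concrete with a little date arithmetic. This is a sketch of the two windows described above, not of the mix_freq internals:

```python
from datetime import date

def month_window(last, n):
    """The n month-start dates ending at `last`, newest first."""
    y, m = last.year, last.month
    out = []
    for _ in range(n):
        out.append(date(y, m, 1))
        m -= 1
        if m == 0:
            y, m = y - 1, 12
    return out

# GDP for 2017 Q2 is dated at the quarter start, 2017-04-01
naive = month_window(date(2017, 4, 1), 3)    # Apr, Mar, Feb: pure date alignment
desired = month_window(date(2017, 6, 1), 3)  # Jun, May, Apr: what horizon=3 achieves
```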
res = estimate(y, yl, x, poly='beta') res.x
_____no_output_____
MIT
examples/MIDASExamples.ipynb
QAQ-Ahuahuahuahua/midaspy-1
You can also call __forecast__ directly. This will use the optimization results returned from __estimate__ to produce a forecast for every date in the index of the forecast inputs (here xf and ylf):
fc = forecast(xf, ylf, res, poly='beta') forecast_df = fc.join(yf) forecast_df['gap'] = forecast_df.yfh - forecast_df.gdp_growth forecast_df gdp.join(fc)[['gdp_growth','yfh']].loc['2005-01-01':].plot(style=['-o','-+'], figsize=(12, 4))
_____no_output_____
MIT
examples/MIDASExamples.ipynb
QAQ-Ahuahuahuahua/midaspy-1
Comparison against univariate ARIMA model
import statsmodels.tsa.api as sm m = sm.AR(gdp['1975-01-01':'2011-01-01'].gdp_growth,) r = m.fit(maxlag=1) r.params fc_ar = r.predict(start='2005-01-01') fc_ar.name = 'xx' df_p = gdp.join(fc)[['gdp_growth','yfh']] df_p.join(fc_ar)[['gdp_growth','yfh','xx']].loc['2005-01-01':].plot(style=['-o','-+'], figsize=(12, 4))
_____no_output_____
MIT
examples/MIDASExamples.ipynb
QAQ-Ahuahuahuahua/midaspy-1
The midas_adl functionThe __midas\_adl__ function wraps up frequency-mixing, fitting, and forecasting into one process. The default mode of forecasting is _fixed_, which means that the data between start_date and end_date will be used to fit the model, and then any data in the input beyond end_date will be used for forecasting. For example, here we're fitting from the beginning of 1985 to the end of 2008, but the gdp data extends to Q1 of 2011 so we get nine forecast points. Three monthly lags of the high-frequency data are specified along with one quarterly lag of GDP.
rmse_fc, fc = midas_adl(gdp.gdp_growth, pay.emp_growth, start_date=datetime.datetime(1985,1,1), end_date=datetime.datetime(2009,1,1), xlag="3m", ylag=1, horizon=3) rmse_fc
_____no_output_____
MIT
examples/MIDASExamples.ipynb
QAQ-Ahuahuahuahua/midaspy-1
You can also change the polynomial used to weight the MIDAS coefficients. The default is 'beta', but you can also specify exponential Almon weighting ('expalmon') or beta with a non-zero last term ('betann')
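For intuition, the beta scheme from the MIDAS literature builds lag weights from a normalized beta density. A minimal sketch of the textbook two-parameter form follows; the package's internal parameterization may differ in details:

```python
def beta_weights(nlags, theta1, theta2):
    """Normalized beta-density lag weights (textbook MIDAS 'beta' form).
    Requires nlags >= 2. Illustrative only -- not the package's code."""
    eps = 1e-6  # keep the evaluation points strictly inside (0, 1)
    xs = [eps + i * (1.0 - 2 * eps) / (nlags - 1) for i in range(nlags)]
    raw = [x ** (theta1 - 1.0) * (1.0 - x) ** (theta2 - 1.0) for x in xs]
    total = sum(raw)
    return [r / total for r in raw]

w = beta_weights(10, 1.0, 5.0)
# theta1 = 1 and theta2 > 1 put smoothly declining weight on more distant lags
```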
rmse_fc, fc = midas_adl(gdp.gdp_growth, pay.emp_growth, start_date=datetime.datetime(1985,1,1), end_date=datetime.datetime(2009,1,1), xlag="3m", ylag=1, horizon=3, poly='expalmon') rmse_fc
_____no_output_____
MIT
examples/MIDASExamples.ipynb
QAQ-Ahuahuahuahua/midaspy-1
Rolling and Recursive ForecastingAs mentioned above, the default forecasting method is fixed, where the model is fit once and then all data after end_date is used for forecasting. Two other methods are supported: _rolling window_ and _recursive_. The _rolling window_ method is just what it sounds like. The start_date and end_date are used for the initial window, and then each new forecast moves that window forward by one period so that you're always doing one-step-ahead forecasts. Of course, to do anything useful this also assumes that the date range of the dependent data extends beyond end_date, accounting for the lags implied by _horizon_. Generally, you'll get lower RMSE values here since the forecasts are always one step ahead.
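The rolling-window scheme can be sketched in a few lines. This illustrates only the splitting logic, not the midas_adl internals:

```python
def rolling_windows(index, window, steps):
    """Yield (fit_range, forecast_point) pairs for one-step-ahead
    rolling-window evaluation over a list-like index."""
    for k in range(steps):
        yield index[k : k + window], index[k + window]

idx = list(range(2000, 2012))  # stand-in for a series of period labels
pairs = list(rolling_windows(idx, window=8, steps=3))
# the 8-period fit window slides forward one period per forecast
```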
results = {h: midas_adl(gdp.gdp_growth, pay.emp_growth, start_date=datetime.datetime(1985,10,1), end_date=datetime.datetime(2009,1,1), xlag="3m", ylag=1, horizon=3, forecast_horizon=h, poly='beta', method='rolling') for h in (1, 2, 5)} results[1][0]
_____no_output_____
MIT
examples/MIDASExamples.ipynb
QAQ-Ahuahuahuahua/midaspy-1
The _recursive_ method is similar except that the start date does not change, so the range over which the fitting happens increases for each new forecast.
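The expanding-window logic behind the recursive method can be sketched the same way; again this is only an illustration, not the package's code:

```python
def recursive_windows(index, initial, steps):
    """Expanding-window scheme: the fit range always starts at the
    beginning of the sample and grows by one period per forecast."""
    for k in range(steps):
        yield index[: initial + k], index[initial + k]

idx = list(range(2000, 2012))  # stand-in for a series of period labels
pairs = list(recursive_windows(idx, initial=8, steps=3))
# each successive fit range keeps its start date and gains one period
```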
results = {h: midas_adl(gdp.gdp_growth, pay.emp_growth, start_date=datetime.datetime(1985,10,1), end_date=datetime.datetime(2009,1,1), xlag="3m", ylag=1, horizon=3, forecast_horizon=h, poly='beta', method='recursive') for h in (1, 2, 5)} results[1][0]
_____no_output_____
MIT
examples/MIDASExamples.ipynb
QAQ-Ahuahuahuahua/midaspy-1
NowcastingPer the manual for the MatLab Toolbox Version 1.0, you can do _nowcasting_ (or MIDAS with leads) basically by adjusting the _horizon_ parameter. For example, below we change the _horizon_ parameter to 1: we're now forecasting with a one-month horizon rather than a one-quarter horizon:
rmse_fc, fc = midas_adl(gdp.gdp_growth, pay.emp_growth, start_date=datetime.datetime(1985,1,1), end_date=datetime.datetime(2009,1,1), xlag="3m", ylag=1, horizon=1) rmse_fc
_____no_output_____
MIT
examples/MIDASExamples.ipynb
QAQ-Ahuahuahuahua/midaspy-1
Not surprisingly, the RMSE drops considerably. CPI vs. Federal Funds Rate__UNDER CONSTRUCTION: Note that these models take considerably longer to fit__
cpi = pd.read_csv('CPIAUCSL.csv', parse_dates=['DATE'], index_col='DATE') ffr = pd.read_csv('DFF_2_Vintages_Starting_2009_09_28.txt', sep='\t', parse_dates=['observation_date'], index_col='observation_date') cpi.head() ffr.head(10) cpi_yoy = ((1. + (np.log(cpi.CPIAUCSL) - np.log(cpi.CPIAUCSL.shift(1)))) ** 12) - 1. cpi_yoy.head() df = pd.concat([cpi_yoy, ffr.DFF_20090928 / 100.], axis=1) df.columns = ['cpi_growth', 'dff'] df.loc['1980-1-1':'2010-1-1'].plot(figsize=(15,4), style=['-+','-.']) cpi_growth = (np.log(cpi.CPIAUCSL) - np.log(cpi.CPIAUCSL.shift(1))) * 100. y, yl, x, yf, ylf, xf = mix_freq(cpi_growth, ffr.DFF_20090928, "1m", 1, 1, start_date=datetime.datetime(1975,10,1), end_date=datetime.datetime(1991,1,1)) x.head() res = estimate(y, yl, x) fc = forecast(xf, ylf, res) fc.join(yf).head() pd.concat([cpi_growth, fc],axis=1).loc['2008-01-01':'2010-01-01'].plot(style=['-o','-+'], figsize=(12, 4)) results = {h: midas_adl(cpi_growth, ffr.DFF_20090928, start_date=datetime.datetime(1975,7,1), end_date=datetime.datetime(1990,11,1), xlag="1m", ylag=1, horizon=1, forecast_horizon=h, method='rolling') for h in (1, 2, 5)} (results[1][0], results[2][0], results[5][0]) results[1][1].plot(figsize=(12,4)) results = {h: midas_adl(cpi_growth, ffr.DFF_20090928, start_date=datetime.datetime(1975,10,1), end_date=datetime.datetime(1991,1,1), xlag="1m", ylag=1, horizon=1, forecast_horizon=h, method='recursive') for h in (1, 2, 5)} results[1][0] results[1][1].plot()
_____no_output_____
MIT
examples/MIDASExamples.ipynb
QAQ-Ahuahuahuahua/midaspy-1
OpenAI GymWe're going to spend the next several weeks learning algorithms that solve decision processes. We are then in need of some interesting decision problems to test our algorithms. That's where OpenAI Gym comes into play. It's a Python library that wraps many classical decision problems, including robot control, video games and board games. So here's how it works:
import gym import matplotlib.pyplot as plt env = gym.make("MountainCar-v0") env.reset() plt.imshow(env.render('rgb_array')) print("Observation space:", env.observation_space) print("Action space:", env.action_space)
Observation space: Box(2,) Action space: Discrete(3)
MIT
Week1_Intro/gym_interface.ipynb
shih-chi-47/Practical_RL
Note: if you're running this on your local machine, you'll see a window pop up with the image above. Don't close it, just alt-tab away. Gym interfaceThe three main methods of an environment are* __reset()__ - reset environment to initial state, _return first observation_* __render()__ - show current environment state (a more colorful version :) )* __step(a)__ - commit action __a__ and return (new observation, reward, is done, info) * _new observation_ - an observation right after committing the action __a__ * _reward_ - a number representing your reward for committing action __a__ * _is done_ - True if the MDP has just finished, False if still in progress * _info_ - some auxiliary stuff about what just happened. Ignore it ~~for now~~.
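The contract these three methods define can be sketched with a toy environment. This is a pure-Python stand-in, not a real gym.Env; the names and dynamics are made up for illustration:

```python
class ToyEnv:
    """Minimal stand-in illustrating the reset/step interface."""

    def reset(self):
        self.pos = 0
        return self.pos            # first observation

    def step(self, action):        # action: -1 or +1
        self.pos += action
        reward = -1.0              # constant cost per step, as in MountainCar
        done = self.pos >= 3       # episode ends at the "flag"
        return self.pos, reward, done, {}

env = ToyEnv()
obs = env.reset()
total = 0.0
done = False
while not done:
    obs, r, done, info = env.step(+1)
    total += r
# reaching pos 3 from 0 takes 3 steps, so the episode return is -3.0
```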
obs0 = env.reset() print("initial observation code:", obs0) # Note: in MountainCar, observation is just two numbers: car position and velocity print("taking action 2 (right)") new_obs, reward, is_done, _ = env.step(2) print("new observation code:", new_obs) print("reward:", reward) print("is game over?:", is_done) # Note: as you can see, the car has moved to the right slightly (around 0.0005)
taking action 2 (right) new observation code: [-0.50978891 0.00090322] reward: -1.0 is game over?: False
MIT
Week1_Intro/gym_interface.ipynb
shih-chi-47/Practical_RL
Play with itBelow is the code that drives the car to the right. However, if you simply use the default policy, the car will not reach the flag at the far right due to gravity.__Your task__ is to fix it. Find a strategy that reaches the flag. You are not required to build any sophisticated algorithms for now, feel free to hard-code :)
from IPython import display # Create env manually to set time limit. Please don't change this. TIME_LIMIT = 250 env = gym.wrappers.TimeLimit( gym.envs.classic_control.MountainCarEnv(), max_episode_steps=TIME_LIMIT + 1, ) actions = {'left': 0, 'stop': 1, 'right': 2} def policy(obs, t): # Write the code for your policy here. You can use the observation # (a tuple of position and velocity), the current time step, or both, # if you want. position, velocity = obs if velocity > 0: a = actions['right'] else: a = actions['left'] # This velocity-following policy rocks the car back and forth, # building momentum until it can climb the hill. return a plt.figure(figsize=(4, 3)) display.clear_output(wait=True) obs = env.reset() for t in range(TIME_LIMIT): plt.gca().clear() action = policy(obs, t) # Call your policy obs, reward, done, _ = env.step(action) # Pass the action chosen by the policy to the environment # We don't do anything with reward here because MountainCar is a very simple environment, # and reward is a constant -1. Therefore, your goal is to end the episode as quickly as possible. # Draw game image on display. plt.imshow(env.render('rgb_array')) display.clear_output(wait=True) display.display(plt.gcf()) print(obs) if done: print("Well done!") break else: print("Time limit exceeded. Try again.") display.clear_output(wait=True) from submit import submit_interface submit_interface(policy, <EMAIL>, <TOKEN>)
_____no_output_____
MIT
Week1_Intro/gym_interface.ipynb
shih-chi-47/Practical_RL
![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Healthcare/25.Date_Normalizer.ipynb) Colab Setup
import json from google.colab import files license_keys = files.upload() with open(list(license_keys.keys())[0]) as f: license_keys = json.load(f) license_keys['JSL_VERSION'] %%capture for k,v in license_keys.items(): %set_env $k=$v !wget https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/jsl_colab_setup.sh !bash jsl_colab_setup.sh import json import os from pyspark.ml import Pipeline, PipelineModel from pyspark.sql import SparkSession from sparknlp.annotator import * from sparknlp_jsl.annotator import * from sparknlp.base import * import sparknlp_jsl import sparknlp from sparknlp.util import * from sparknlp.pretrained import ResourceDownloader from pyspark.sql import functions as F import pandas as pd spark = sparknlp_jsl.start(license_keys['SECRET']) spark
_____no_output_____
Apache-2.0
tutorials/Certification_Trainings/Healthcare/25.Date_Normalizer.ipynb
Rock-ass/spark-nlp-workshop
**Date Normalizer**A new annotator that transforms date chunks into normalized dates with the format YYYY/MM/DD. This annotator identifies dates in chunk annotations and transforms them to the format YYYY/MM/DD. We are going to create date chunks with different formats:
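For intuition, the absolute-date part of this normalization can be sketched in plain Python. This is a hedged illustration only: the patterns and helper below are not part of Spark NLP, and the real annotator also resolves relative dates like 'next week' and partial dates like '11/2018':

```python
from datetime import datetime

def normalize_date(text):
    """Try a few absolute-date patterns and emit YYYY/MM/DD.
    Illustrative sketch only, not the DateNormalizer implementation."""
    patterns = ["%m/%d/%Y", "%d%b%Y", "%b %d, %Y", "%d.%m.%Y", "%d%B %Y"]
    for p in patterns:
        try:
            return datetime.strptime(text, p).strftime("%Y/%m/%d")
        except ValueError:
            pass
    return None  # relative/partial dates need more machinery than this
```

For example, `normalize_date("12Mar2021")` yields `'2021/03/12'` and `normalize_date("13.04.1999")` yields `'1999/04/13'`, matching the normalized output shown further below.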
dates = [ '08/02/2018', '11/2018', '11/01/2018', '12Mar2021', 'Jan 30, 2018', '13.04.1999', '3April 2020', 'next monday', 'today', 'next week' ] from pyspark.sql.types import StringType df_dates = spark.createDataFrame(dates,StringType()).toDF('ner_chunk')
_____no_output_____
Apache-2.0
tutorials/Certification_Trainings/Healthcare/25.Date_Normalizer.ipynb
Rock-ass/spark-nlp-workshop
We are going to transform that text into documents in Spark NLP.
document_assembler = DocumentAssembler().setInputCol('ner_chunk').setOutputCol('document') documents_DF = document_assembler.transform(df_dates)
_____no_output_____
Apache-2.0
tutorials/Certification_Trainings/Healthcare/25.Date_Normalizer.ipynb
Rock-ass/spark-nlp-workshop
After that, we are going to transform those documents into chunks.
from sparknlp.functions import map_annotations_col chunks_df = map_annotations_col(documents_DF.select("document","ner_chunk"), lambda x: [Annotation('chunk', a.begin, a.end, a.result, a.metadata, a.embeddings) for a in x], "document", "chunk_date", "chunk") chunks_df.select('chunk_date').show(truncate=False)
+---------------------------------------------------+ |chunk_date | +---------------------------------------------------+ |[{chunk, 0, 9, 08/02/2018, {sentence -> 0}, []}] | |[{chunk, 0, 6, 11/2018, {sentence -> 0}, []}] | |[{chunk, 0, 9, 11/01/2018, {sentence -> 0}, []}] | |[{chunk, 0, 8, 12Mar2021, {sentence -> 0}, []}] | |[{chunk, 0, 11, Jan 30, 2018, {sentence -> 0}, []}]| |[{chunk, 0, 9, 13.04.1999, {sentence -> 0}, []}] | |[{chunk, 0, 10, 3April 2020, {sentence -> 0}, []}] | |[{chunk, 0, 10, next monday, {sentence -> 0}, []}] | |[{chunk, 0, 4, today, {sentence -> 0}, []}] | |[{chunk, 0, 8, next week, {sentence -> 0}, []}] | +---------------------------------------------------+
Apache-2.0
tutorials/Certification_Trainings/Healthcare/25.Date_Normalizer.ipynb
Rock-ass/spark-nlp-workshop
Now we are going to normalize those chunks using the DateNormalizer.
date_normalizer = DateNormalizer().setInputCols('chunk_date').setOutputCol('date') date_normaliced_df = date_normalizer.transform(chunks_df)
_____no_output_____
Apache-2.0
tutorials/Certification_Trainings/Healthcare/25.Date_Normalizer.ipynb
Rock-ass/spark-nlp-workshop
We are going to show how the dates are normalized.
dateNormalizedClean = date_normaliced_df.selectExpr("ner_chunk","date.result as dateresult","date.metadata as metadata") dateNormalizedClean.withColumn("dateresult", dateNormalizedClean["dateresult"] .getItem(0)).withColumn("metadata", dateNormalizedClean["metadata"] .getItem(0)['normalized']).show(truncate=False)
+------------+----------+--------+ |ner_chunk |dateresult|metadata| +------------+----------+--------+ |08/02/2018 |2018/08/02|true | |11/2018 |2018/11/DD|true | |11/01/2018 |2018/11/01|true | |12Mar2021 |2021/03/12|true | |Jan 30, 2018|2018/01/30|true | |13.04.1999 |1999/04/13|true | |3April 2020 |2020/04/03|true | |next monday |2021/06/19|true | |today |2021/06/13|true | |next week |2021/06/20|true | +------------+----------+--------+
Apache-2.0
tutorials/Certification_Trainings/Healthcare/25.Date_Normalizer.ipynb
Rock-ass/spark-nlp-workshop