markdown | code | output | license | path | repo_name |
|---|---|---|---|---|---|
Lab 1: Markov Decision Processes - Problem 3 Lab InstructionsAll your answers should be written in this notebook. You shouldn't need to write or modify any other files.**You should execute every block of code to not miss any dependency.***This project was developed by Peter Chen, Rocky Duan, Pieter Abbeel for the Berkeley Deep RL Bootcamp, August 2017. Bootcamp website with slides and lecture videos: https://sites.google.com/view/deep-rl-bootcamp/. It is adapted from CS188 project materials: http://ai.berkeley.edu/project_overview.html.*-------------------------- | import numpy as np, numpy.random as nr, gym
import matplotlib.pyplot as plt
%matplotlib inline
np.set_printoptions(precision=3) | _____no_output_____ | MIT | Lab 1/ Problem 3 genouadr.ipynb | AdrianGScorp/deeprlbootcamp |
Problem 3: Sampling-based Tabular Q-LearningSo far we have implemented Value Iteration and Policy Iteration, both of which require access to an MDP's dynamics model. This requirement can sometimes be restrictive - for example, if the environment is given as a black-box physics simulator, then we won't be able to read off the whole transition model. We can, however, use sampling-based Q-Learning to learn in this type of environment. For this exercise, we will learn to control a Crawler robot. Let's first try some completely random actions to see how the robot moves and familiarize ourselves with the Gym environment interface again. | from crawler_env import CrawlingRobotEnv
env = CrawlingRobotEnv()
print("We can inspect the observation space and action space of this Gym Environment")
print("-----------------------------------------------------------------------------")
print("Action space:", env.action_space)
print("It's a discrete space with %i actions to take" % env.action_space.n)
print("Each action corresponds to increasing/decreasing the angle of one of the joints")
print("We can also sample from this action space:", env.action_space.sample())
print("Another action sample:", env.action_space.sample())
print("Another action sample:", env.action_space.sample())
print("Observation space:", env.observation_space, ", which means it's a 9x13 grid.")
print("It's the discretized version of the robot's two joint angles")
env = CrawlingRobotEnv(
render=True, # turn render mode on to visualize random motion
)
# standard procedure for interfacing with a Gym environment
cur_state = env.reset() # reset environment and get initial state
ret = 0.
done = False
i = 0
while not done:
action = env.action_space.sample() # sample an action randomly
next_state, reward, done, info = env.step(action)
ret += reward
cur_state = next_state
i += 1
if i == 1500:
break # for the purpose of this visualization, let's only run for 1500 steps
# also note the GUI won't close automatically
# you can close the visualization GUI with the following method
env.close_gui() | _____no_output_____ | MIT | Lab 1/ Problem 3 genouadr.ipynb | AdrianGScorp/deeprlbootcamp |
You will see the random controller can sometimes make progress but it won't get very far. Let's implement Tabular Q-Learning with $\epsilon$-greedy exploration to find a better policy piece by piece. | from collections import defaultdict
import random
# dictionary that maps from state, s, to a numpy array of Q values [Q(s, a_1), Q(s, a_2) ... Q(s, a_n)]
# and everything is initialized to 0.
q_vals = defaultdict(lambda: np.array([0. for _ in range(env.action_space.n)]))
print("Q-values for state (0, 0): %s" % q_vals[(0, 0)], "which is a list of Q values for each action")
print("As such, the Q value of taking action 3 in state (1,2), i.e. Q((1,2), 3), can be accessed by q_vals[(1,2)][3]:", q_vals[(1,2)][3])
def eps_greedy(q_vals, eps, state):
"""
Inputs:
q_vals: q value tables
eps: epsilon
state: current state
Outputs:
random action with probability of eps; argmax Q(s, .) with probability of (1-eps)
"""
# you might want to use random.random() to implement random exploration
# number of actions can be read off from len(q_vals[state])
import random
# YOUR CODE HERE
# MY CODE -------------------------------------------------------------------
if random.random() <= eps:
return np.random.randint(0, len(q_vals[state]))
return np.argmax(q_vals[state])
#----------------------------------------------------------------------------
# test 1
dummy_q = defaultdict(lambda: np.array([0. for _ in range(env.action_space.n)]))
test_state = (0, 0)
dummy_q[test_state][0] = 10.
trials = 100000
sampled_actions = [
int(eps_greedy(dummy_q, 0.3, test_state))
for _ in range(trials)
]
freq = np.sum(np.array(sampled_actions) == 0) / trials
tgt_freq = 0.3 / env.action_space.n + 0.7
if np.isclose(freq, tgt_freq, atol=1e-2):
print("Test1 passed")
else:
print("Test1: Expected to select 0 with frequency %.2f but got %.2f" % (tgt_freq, freq))
# test 2
dummy_q = defaultdict(lambda: np.array([0. for _ in range(env.action_space.n)]))
test_state = (0, 0)
dummy_q[test_state][2] = 10.
trials = 100000
sampled_actions = [
int(eps_greedy(dummy_q, 0.5, test_state))
for _ in range(trials)
]
freq = np.sum(np.array(sampled_actions) == 2) / trials
tgt_freq = 0.5 / env.action_space.n + 0.5
if np.isclose(freq, tgt_freq, atol=1e-2):
print("Test2 passed")
else:
print("Test2: Expected to select 2 with frequency %.2f but got %.2f" % (tgt_freq, freq)) | Test1 passed
Test2 passed
| MIT | Lab 1/ Problem 3 genouadr.ipynb | AdrianGScorp/deeprlbootcamp |
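The target frequencies in the tests above come from a simple identity: with $\epsilon$-greedy over $n$ actions, the greedy action is selected with probability $(1-\epsilon) + \epsilon/n$ (it can also be drawn by the random branch). A standalone sketch verifying this with a hypothetical 4-action Q-table, independent of the crawler environment:

```python
import random
import numpy as np

def eps_greedy_demo(q_row, eps, rng):
    # Random action with probability eps, otherwise the greedy action.
    if rng.random() <= eps:
        return rng.randrange(len(q_row))
    return int(np.argmax(q_row))

rng = random.Random(0)
q_row = [10., 0., 0., 0.]          # action 0 is greedy
eps, n, trials = 0.3, 4, 100_000
hits = sum(eps_greedy_demo(q_row, eps, rng) == 0 for _ in range(trials))
freq = hits / trials
expected = (1 - eps) + eps / n     # same formula as tgt_freq in the tests
assert abs(freq - expected) < 0.01
```

If the environment really has 4 actions, `expected` is 0.775, which is the `tgt_freq` value test 1 checks against with `eps = 0.3`.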
Next we will implement the Q-learning update. After we observe a transition $s, a, s', r$:$$\textrm{target}(s') = R(s,a,s') + \gamma \max_{a'} Q_{k}(s',a')$$$$Q_{k+1}(s,a) \leftarrow (1-\alpha) Q_k(s,a) + \alpha \left[ \textrm{target}(s') \right]$$ | def q_learning_update(gamma, alpha, q_vals, cur_state, action, next_state, reward):
"""
Inputs:
gamma: discount factor
alpha: learning rate
q_vals: q value table
cur_state: current state
action: action taken in current state
next_state: next state results from taking `action` in `cur_state`
reward: reward received from this transition
Performs in-place update of q_vals table to implement one step of Q-learning
"""
# YOUR CODE HERE
# MY CODE -------------------------------------------------------------------
target = reward + gamma*np.max(q_vals[next_state])
q_vals[cur_state][action] -= alpha*(q_vals[cur_state][action] - target)
#----------------------------------------------------------------------------
# testing your q_learning_update implementation
dummy_q = q_vals.copy()
test_state = (0, 0)
test_next_state = (0, 1)
dummy_q[test_state][0] = 10.
dummy_q[test_next_state][1] = 10.
q_learning_update(0.9, 0.1, dummy_q, test_state, 0, test_next_state, 1.1)
tgt = 10.01
if np.isclose(dummy_q[test_state][0], tgt,):
print("Test passed")
else:
print("Q(test_state, 0) is expected to be %.2f but got %.2f" % (tgt, dummy_q[test_state][0]))
# now with the main components tested, we can put everything together to create a complete q learning agent
env = CrawlingRobotEnv()
q_vals = defaultdict(lambda: np.array([0. for _ in range(env.action_space.n)]))
gamma = 0.9
alpha = 0.1
eps = 0.5
cur_state = env.reset()
def greedy_eval():
"""evaluate greedy policy w.r.t current q_vals"""
test_env = CrawlingRobotEnv(horizon=np.inf)
prev_state = test_env.reset()
ret = 0.
done = False
H = 100
for i in range(H):
action = np.argmax(q_vals[prev_state])
state, reward, done, info = test_env.step(action)
ret += reward
prev_state = state
return ret / H
for itr in range(300000):
# YOUR CODE HERE
# Hint: use eps_greedy & q_learning_update
# MY CODE --------------------------------------------------------------------
action = eps_greedy(q_vals, eps, cur_state)
next_state, reward, done, info = env.step(action)
q_learning_update(gamma, alpha, q_vals, cur_state, action, next_state, reward)
cur_state = next_state
#-----------------------------------------------------------------------------
if itr % 50000 == 0: # evaluation
print("Itr %i # Average speed: %.2f" % (itr, greedy_eval()))
# at the end of learning your crawler should reach a speed of >= 3 | Itr 0 # Average speed: 0.05
Itr 50000 # Average speed: 2.03
Itr 100000 # Average speed: 3.37
Itr 150000 # Average speed: 3.37
Itr 200000 # Average speed: 3.37
Itr 250000 # Average speed: 3.37
| MIT | Lab 1/ Problem 3 genouadr.ipynb | AdrianGScorp/deeprlbootcamp |
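The update rule can be checked by hand with the same numbers as the earlier unit test ($\gamma=0.9$, $\alpha=0.1$, $Q(s,a)=10$, $\max_{a'}Q(s',a')=10$, $r=1.1$), confirming the expected value 10.01:

```python
import numpy as np

gamma, alpha = 0.9, 0.1
q_sa, max_q_next, reward = 10.0, 10.0, 1.1

target = reward + gamma * max_q_next          # 1.1 + 9.0 = 10.1
new_q = (1 - alpha) * q_sa + alpha * target   # 0.9*10 + 0.1*10.1 = 10.01

# The in-place form used in q_learning_update is algebraically identical:
new_q_inplace = q_sa - alpha * (q_sa - target)
assert np.isclose(new_q, 10.01)
assert np.isclose(new_q, new_q_inplace)
```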
After the learning is successful, we can visualize the learned robot controller. Remember we learn this just from interacting with the environment instead of peeking into the dynamics model! | env = CrawlingRobotEnv(render=True, horizon=500)
prev_state = env.reset()
ret = 0.
done = False
while not done:
action = np.argmax(q_vals[prev_state])
state, reward, done, info = env.step(action)
ret += reward
prev_state = state
# you can close the visualization GUI with the following method
env.close_gui() | _____no_output_____ | MIT | Lab 1/ Problem 3 genouadr.ipynb | AdrianGScorp/deeprlbootcamp |
Exercise 6-3 LSTMThe following two cells will create an LSTM cell with one neuron. We scale the output of the LSTM linearly and add a bias. The output is then wrapped in a sigmoid activation. The goal is to predict a time series where every $n^{th}$ ($5^{th}$ in the current example) element is 1 and all others are 0.a) Please read and understand the source code below.b) Inspect the output of the predictions. What do you observe? How does the LSTM manage to predict the next element in the sequence? | import tensorflow as tf
import numpy as np
from matplotlib import pyplot as plt
tf.reset_default_graph()
tf.set_random_seed(12314)
epochs=50
zero_steps = 5
learning_rate = 0.01
lstm_neurons = 1
out_dim = 1
num_features = 1
batch_size = zero_steps
window_size = zero_steps*2
time_steps = 5
x = tf.placeholder(tf.float32, [None, window_size, num_features], 'x')
y = tf.placeholder(tf.float32, [None, out_dim], 'y')
lstm = tf.nn.rnn_cell.LSTMCell(lstm_neurons)
state = lstm.zero_state(batch_size, dtype=tf.float32)
regression_w = tf.Variable(tf.random_normal([lstm_neurons]))
regression_b = tf.Variable(tf.random_normal([out_dim]))
outputs, state = tf.contrib.rnn.static_rnn(lstm, tf.unstack(x, window_size, 1), state)
output = outputs[-1]
predicted = tf.nn.sigmoid(output * regression_w + regression_b)
cost = tf.reduce_mean(tf.losses.mean_squared_error(y, predicted))
optimizer = tf.train.RMSPropOptimizer(learning_rate=learning_rate).minimize(cost)
forget_gate = output.op.inputs[1].op.inputs[0].op.inputs[0].op.inputs[0]
input_gate = output.op.inputs[1].op.inputs[0].op.inputs[1].op.inputs[0]
cell_candidates = output.op.inputs[1].op.inputs[0].op.inputs[1].op.inputs[1]
output_gate_sig = output.op.inputs[0]
output_gate_tanh = output.op.inputs[1]
X = [
[[ (shift-n) % zero_steps == 0 ] for n in range(window_size)
] for shift in range(batch_size)
]
Y = [[ shift % zero_steps == 0 ] for shift in range(batch_size) ]
with tf.Session() as sess:
sess.run(tf.initializers.global_variables())
loss = 1
epoch = 0
while loss >= 1e-5:
epoch += 1
_, loss = sess.run([optimizer, cost], {x:X, y:Y})
if epoch % (epochs//10) == 0:
print("loss %.5f" % (loss), end='\t\t\r')
print()
outs, stat, pred, fg, inpg, cell_cands, outg_sig, outg_tanh = sess.run([outputs, state, predicted, forget_gate, input_gate, cell_candidates, output_gate_sig, output_gate_tanh], {x:X, y:Y})
outs = np.asarray(outs)
for batch in reversed(range(batch_size)):
print("input:")
print(np.asarray(X)[batch].astype(int).reshape(-1))
print("forget\t\t%.4f\ninput gate\t%.4f\ncell cands\t%.4f\nout gate sig\t%.4f\nout gate tanh\t%.4f\nhidden state\t%.4f\ncell state\t%.4f\npred\t\t%.4f\n\n" % (
fg[batch,0],
inpg[batch,0],
cell_cands[batch,0],
outg_sig[batch,0],
outg_tanh[batch,0],
stat.h[batch,0],
stat.c[batch,0],
pred[batch,0])) | loss 0.00001
input:
[0 0 0 0 1 0 0 0 0 1]
forget 0.0135
input gate 0.9997
cell cands 0.9994
out gate sig 1.0000
out gate tanh 0.7586
hidden state 0.7586
cell state 0.9928
pred 0.0000
input:
[0 0 0 1 0 0 0 0 1 0]
forget 0.8747
input gate 0.9860
cell cands -0.0476
out gate sig 1.0000
out gate tanh 0.6759
hidden state 0.6759
cell state 0.8215
pred 0.0000
input:
[0 0 1 0 0 0 0 1 0 0]
forget 0.8504
input gate 0.9877
cell cands -0.1311
out gate sig 0.9999
out gate tanh 0.5148
hidden state 0.5147
cell state 0.5692
pred 0.0000
input:
[0 1 0 0 0 0 1 0 0 0]
forget 0.7921
input gate 0.9903
cell cands -0.2875
out gate sig 0.9999
out gate tanh 0.1646
hidden state 0.1646
cell state 0.1661
pred 0.0052
input:
[1 0 0 0 0 1 0 0 0 0]
forget 0.6150
input gate 0.9943
cell cands -0.5732
out gate sig 0.9997
out gate tanh -0.4363
hidden state -0.4361
cell state -0.4676
pred 0.9953
| MIT | week6/lstm.ipynb | changkun/ws-18-19-deep-learning-tutorial |
2020L-WUM Homework 2 Code: **Bartłomiej Eljasiak** Loading the libraries We will use these libraries in many places; some code fragments contain additional imports, but in those cases the imported library is only used within the chunk where it was loaded. | import pandas as pd
import seaborn as sns
import numpy as np
import sklearn | _____no_output_____ | Apache-2.0 | Prace_domowe/Praca_domowa2/Grupa1/EljasiakBartlomiej/main.ipynb | niladrem/2020L-WUM |
Loading the data | # local version
_data=pd.read_csv('allegro-api-transactions.csv')
#online version
#_data = pd.read_csv('https://www.dropbox.com/s/360xhh2d9lnaek3/allegro-api-transactions.csv?dl=1')
cdata=_data.copy() | _____no_output_____ | Apache-2.0 | Prace_domowe/Praca_domowa2/Grupa1/EljasiakBartlomiej/main.ipynb | niladrem/2020L-WUM |
A first look at the data | cdata.head() | _____no_output_____ | Apache-2.0 | Prace_domowe/Praca_domowa2/Grupa1/EljasiakBartlomiej/main.ipynb | niladrem/2020L-WUM |
Data preprocessing | len(cdata.it_location.unique())
cdata.info() | <class 'pandas.core.frame.DataFrame'>
RangeIndex: 420020 entries, 0 to 420019
Data columns (total 14 columns):
lp 420020 non-null int64
date 420020 non-null object
item_id 420020 non-null int64
categories 420020 non-null object
pay_option_on_delivery 420020 non-null int64
pay_option_transfer 420020 non-null int64
seller 420020 non-null object
price 420020 non-null float64
it_is_allegro_standard 420020 non-null int64
it_quantity 420020 non-null int64
it_is_brand_zone 420020 non-null int64
it_seller_rating 420020 non-null int64
it_location 420020 non-null object
main_category 420020 non-null object
dtypes: float64(1), int64(8), object(5)
memory usage: 36.9+ MB
| Apache-2.0 | Prace_domowe/Praca_domowa2/Grupa1/EljasiakBartlomiej/main.ipynb | niladrem/2020L-WUM |
At first glance there are no missing values in the data, which makes our work much easier. Encoding categorical variables We know that our target will be `price`, so in this section we want to convert all the categorical variables we will use later into numeric ones. An effective conversion using appropriate methods such as **target encoding** and **one-hot encoding** will let us transform the current information so that we can use it in mathematical operations. We will, however, skip the `categories` column in these transformations. Target encoding for `it_location` | import category_encoders
y=cdata.price
te = category_encoders.target_encoder.TargetEncoder(cdata.it_location, smoothing=100)
encoded = te.fit_transform(cdata.it_location,y)
encoded | _____no_output_____ | Apache-2.0 | Prace_domowe/Praca_domowa2/Grupa1/EljasiakBartlomiej/main.ipynb | niladrem/2020L-WUM |
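Target encoding replaces each category with a smoothed estimate of the mean target value within that category; the `smoothing` parameter controls how strongly rare categories are pulled toward the global mean. A simplified additive-smoothing sketch on toy data (the library's `TargetEncoder` uses a related but more elaborate sigmoid-based smoothing formula):

```python
import numpy as np

# Toy data: category labels and a numeric target (illustrative, not the Allegro data).
cats = np.array(["Warszawa", "Warszawa", "Kraków", "Kraków", "Kraków", "Łódź"])
target = np.array([100., 200., 50., 70., 60., 10.])

prior = target.mean()        # global mean of the target
m = 2.0                      # smoothing strength (hyperparameter)
encoding = {}
for cat in np.unique(cats):
    mask = cats == cat
    n, cat_mean = mask.sum(), target[mask].mean()
    # Additive smoothing: shrink small categories toward the global mean.
    encoding[cat] = (n * cat_mean + m * prior) / (n + m)

# A rare category ends up between its raw mean and the prior.
assert min(10., prior) < encoding["Łódź"] < max(10., prior)
assert abs(encoding["Kraków"] - (3 * 60. + m * prior) / (3 + m)) < 1e-9
```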
Different ways of encoding the `main_category` column One-hot Coding | from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
# integer encode
le = LabelEncoder()
integer_encoded = le.fit_transform(cdata.main_category)
print(integer_encoded)
# binary encode
onehot_encoder = OneHotEncoder(categories='auto',sparse=False)
integer_encoded = integer_encoded.reshape(len(integer_encoded), 1)
onehot_encoded = onehot_encoder.fit_transform(integer_encoded)
print(onehot_encoded) | [12 18 6 ... 18 5 15]
[[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
...
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]]
| Apache-2.0 | Prace_domowe/Praca_domowa2/Grupa1/EljasiakBartlomiej/main.ipynb | niladrem/2020L-WUM |
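The LabelEncoder + OneHotEncoder pipeline above amounts to indexing rows of an identity matrix with the integer codes. A minimal NumPy sketch using the first few integer codes printed above:

```python
import numpy as np

labels = np.array([12, 18, 6, 18, 5, 15])   # integer-encoded categories, as above
n_classes = 26
one_hot = np.eye(n_classes)[labels]          # row k of the identity = one-hot vector for class k

assert one_hot.shape == (6, 26)
assert (one_hot.sum(axis=1) == 1).all()      # exactly one 1 per row
assert one_hot[0, 12] == 1 and one_hot[2, 6] == 1
```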
This way we turned the categorical column into 26 columns of 0/1 values. It is not a bad solution, but it definitely increases the size of our data frame and lengthens model training time. A better solution probably exists. Helmert Coding Documentation: [scikit-learn](http://contrib.scikit-learn.org/categorical-encoding/helmert.html) | # taken from category_encoders
helmert_encoder = category_encoders.HelmertEncoder()
helmert_encoded = helmert_encoder.fit_transform(cdata.main_category)
#showing only first 5 encoded rows
print(helmert_encoded.loc[1:5,:].transpose()) | 1 2 3 4 5
intercept 1.0 1.0 1.0 1.0 1.0
main_category_0 1.0 0.0 0.0 1.0 1.0
main_category_1 -1.0 2.0 0.0 -1.0 -1.0
main_category_2 -1.0 -1.0 3.0 -1.0 -1.0
main_category_3 -1.0 -1.0 -1.0 -1.0 -1.0
main_category_4 -1.0 -1.0 -1.0 -1.0 -1.0
main_category_5 -1.0 -1.0 -1.0 -1.0 -1.0
main_category_6 -1.0 -1.0 -1.0 -1.0 -1.0
main_category_7 -1.0 -1.0 -1.0 -1.0 -1.0
main_category_8 -1.0 -1.0 -1.0 -1.0 -1.0
main_category_9 -1.0 -1.0 -1.0 -1.0 -1.0
main_category_10 -1.0 -1.0 -1.0 -1.0 -1.0
main_category_11 -1.0 -1.0 -1.0 -1.0 -1.0
main_category_12 -1.0 -1.0 -1.0 -1.0 -1.0
main_category_13 -1.0 -1.0 -1.0 -1.0 -1.0
main_category_14 -1.0 -1.0 -1.0 -1.0 -1.0
main_category_15 -1.0 -1.0 -1.0 -1.0 -1.0
main_category_16 -1.0 -1.0 -1.0 -1.0 -1.0
main_category_17 -1.0 -1.0 -1.0 -1.0 -1.0
main_category_18 -1.0 -1.0 -1.0 -1.0 -1.0
main_category_19 -1.0 -1.0 -1.0 -1.0 -1.0
main_category_20 -1.0 -1.0 -1.0 -1.0 -1.0
main_category_21 -1.0 -1.0 -1.0 -1.0 -1.0
main_category_22 -1.0 -1.0 -1.0 -1.0 -1.0
main_category_23 -1.0 -1.0 -1.0 -1.0 -1.0
main_category_24 -1.0 -1.0 -1.0 -1.0 -1.0
main_category_25 -1.0 -1.0 -1.0 -1.0 -1.0
| Apache-2.0 | Prace_domowe/Praca_domowa2/Grupa1/EljasiakBartlomiej/main.ipynb | niladrem/2020L-WUM |
Backward Difference Coding Documentation: [scikit-learn](http://contrib.scikit-learn.org/categorical-encoding/backward_difference.html#backward-difference-coding) | # taken from category_encoders
back_diff_encoder = category_encoders.BackwardDifferenceEncoder()
back_diff_encoded = back_diff_encoder.fit_transform(cdata.main_category)
#showing only first 5 encoded rows
print(back_diff_encoded.loc[1:5,:].transpose()) | 1 2 3 4 5
intercept 1.000000 1.000000 1.000000 1.000000 1.000000
main_category_0 0.037037 0.037037 0.037037 0.037037 0.037037
main_category_1 -0.925926 0.074074 0.074074 -0.925926 -0.925926
main_category_2 -0.888889 -0.888889 0.111111 -0.888889 -0.888889
main_category_3 -0.851852 -0.851852 -0.851852 -0.851852 -0.851852
main_category_4 -0.814815 -0.814815 -0.814815 -0.814815 -0.814815
main_category_5 -0.777778 -0.777778 -0.777778 -0.777778 -0.777778
main_category_6 -0.740741 -0.740741 -0.740741 -0.740741 -0.740741
main_category_7 -0.703704 -0.703704 -0.703704 -0.703704 -0.703704
main_category_8 -0.666667 -0.666667 -0.666667 -0.666667 -0.666667
main_category_9 -0.629630 -0.629630 -0.629630 -0.629630 -0.629630
main_category_10 -0.592593 -0.592593 -0.592593 -0.592593 -0.592593
main_category_11 -0.555556 -0.555556 -0.555556 -0.555556 -0.555556
main_category_12 -0.518519 -0.518519 -0.518519 -0.518519 -0.518519
main_category_13 -0.481481 -0.481481 -0.481481 -0.481481 -0.481481
main_category_14 -0.444444 -0.444444 -0.444444 -0.444444 -0.444444
main_category_15 -0.407407 -0.407407 -0.407407 -0.407407 -0.407407
main_category_16 -0.370370 -0.370370 -0.370370 -0.370370 -0.370370
main_category_17 -0.333333 -0.333333 -0.333333 -0.333333 -0.333333
main_category_18 -0.296296 -0.296296 -0.296296 -0.296296 -0.296296
main_category_19 -0.259259 -0.259259 -0.259259 -0.259259 -0.259259
main_category_20 -0.222222 -0.222222 -0.222222 -0.222222 -0.222222
main_category_21 -0.185185 -0.185185 -0.185185 -0.185185 -0.185185
main_category_22 -0.148148 -0.148148 -0.148148 -0.148148 -0.148148
main_category_23 -0.111111 -0.111111 -0.111111 -0.111111 -0.111111
main_category_24 -0.074074 -0.074074 -0.074074 -0.074074 -0.074074
main_category_25 -0.037037 -0.037037 -0.037037 -0.037037 -0.037037
| Apache-2.0 | Prace_domowe/Praca_domowa2/Grupa1/EljasiakBartlomiej/main.ipynb | niladrem/2020L-WUM |
Filling in missing values Selecting data from the frame. From here on we will work with only 3 columns of the data, so I restrict the frame to them for clarity. | data_selected= cdata.loc[:,['price','it_seller_rating','it_quantity']]
data_selected.head() | _____no_output_____ | Apache-2.0 | Prace_domowe/Praca_domowa2/Grupa1/EljasiakBartlomiej/main.ipynb | niladrem/2020L-WUM |
Removing data from a column. We will remove data from a column with `df.column.sample(frac)`, where `frac` is the fraction of the data we want to keep. This guarantees a reasonably random removal of data, which should be sufficient for what follows. | cdata.price.sample(frac=0.9)
| _____no_output_____ | Apache-2.0 | Prace_domowe/Praca_domowa2/Grupa1/EljasiakBartlomiej/main.ipynb | niladrem/2020L-WUM |
Ocena skutecznosci imputacjiDo oceny skuteczności podanych algorytmów imputacji danych musimy przyjąć jakis sposób liczenia ich. Zgodnie z sugestią prowadzącej skorszystam z [RMSE](https://en.wikipedia.org/wiki/Root_mean_square) czyli root mean square error. Nazwa powinna przybliżyć sposób, którm RMSE jest wyznaczane, jednak ciekawskich zachęcam do wejścia w link. Imputacja danychNapiszmy więc funkcje, która pozwoli nam stestować wybrany sposób imputacji. | from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import IterativeImputer
from sklearn.metrics import mean_squared_error
def test_imputation(imputer,iterations=10):
_resoults=[]
# we always use the same data, so it's taken globally
for i in range(iterations):
test_data = data_selected.copy()
test_data.it_seller_rating = test_data.it_seller_rating.sample(frac = 0.9)
data_imputed = pd.DataFrame(imputer.fit_transform(test_data))
_resoults.append(np.sqrt(mean_squared_error(data_selected,data_imputed)))
return _resoults | _____no_output_____ | Apache-2.0 | Prace_domowe/Praca_domowa2/Grupa1/EljasiakBartlomiej/main.ipynb | niladrem/2020L-WUM |
I to niech będzie przykład działania takiej funkcji | imputer = IterativeImputer(max_iter=10,random_state=0)
RMSE_list = test_imputation(imputer,20)
print("Średnie RMSE wynosi", round(np.mean(RMSE_list)))
print('Odchylenie standardowe RMSE wynosi: ', round(np.std(RMSE_list)))
RMSE_list | Średnie RMSE wynosi 6659.0
Odchylenie standardowe RMSE wynosi: 64.0
| Apache-2.0 | Prace_domowe/Praca_domowa2/Grupa1/EljasiakBartlomiej/main.ipynb | niladrem/2020L-WUM |
The standard deviation is quite small, so I consider the imputation method effective. Those interested are encouraged to test this function with other imputation types and a varying number of iterations. Removing data from multiple columns. Let's repeat the previous example with a small modification: this time we will remove data from both `it_seller_rating` and `it_quantity`. Let's write a suitable function and look at the results. | def test_imputation2(imputer,iterations=10):
_resoults=[]
# we always use the same data, so it's taken globally
for i in range(iterations):
test_data = data_selected.copy()
test_data.it_seller_rating = test_data.it_seller_rating.sample(frac = 0.9)
test_data.it_quantity = test_data.it_quantity.sample(frac = 0.9)
data_imputed = pd.DataFrame(imputer.fit_transform(test_data))
_resoults.append(np.sqrt(mean_squared_error(data_selected,data_imputed)))
return _resoults
imputer = IterativeImputer(max_iter=10,random_state=0)
RMSE_list = test_imputation2(imputer,20)
print("Średnie RMSE wynosi", round(np.mean(RMSE_list)))
print('Odchylenie standardowe RMSE wynosi: ', round(np.std(RMSE_list)))
RMSE_list | Średnie RMSE wynosi 7953.0
Odchylenie standardowe RMSE wynosi: 60.0
| Apache-2.0 | Prace_domowe/Praca_domowa2/Grupa1/EljasiakBartlomiej/main.ipynb | niladrem/2020L-WUM |
As we might expect, the average error is larger when we removed more data. Again, I encourage you to repeat the computations and check the results. A look at `IterativeImputer` imputation. I used a particular imputation method, `IterativeImputer`, which I did not say much about while using it. Here I would like to present it in more detail and see how the number of iterations affects imputation quality. I will test our imputer over a whole range of `max_iter` values, which is exactly what the loop below does. **Note: the code below takes quite a long time to run** | upper_iter_limit = 30
lower_iter_limit = 5
imputation_iterations = 10
mean_RMSE ={
"single": [],
"multi": [],
}
for imputer_iterations in range(lower_iter_limit,upper_iter_limit,2):
_resoults_single = []
_resoults_multi = []
imputer = IterativeImputer(max_iter=imputer_iterations,random_state=0)
print("max_iter: ", imputer_iterations, "/",upper_iter_limit)
# Data missing from single columns
_resoults_multi.append(test_imputation(imputer,imputation_iterations))
# Data missing from multiple column
_resoults_single.append(test_imputation2(imputer,imputation_iterations))
mean_RMSE['single'].append(np.mean(_resoults_single))
mean_RMSE['multi'].append(np.mean(_resoults_multi))
| max_iter: 5 / 30
max_iter: 7 / 30
max_iter: 9 / 30
max_iter: 11 / 30
max_iter: 13 / 30
max_iter: 15 / 30
max_iter: 17 / 30
max_iter: 19 / 30
max_iter: 21 / 30
max_iter: 23 / 30
max_iter: 25 / 30
max_iter: 27 / 30
max_iter: 29 / 30
| Apache-2.0 | Prace_domowe/Praca_domowa2/Grupa1/EljasiakBartlomiej/main.ipynb | niladrem/2020L-WUM |
Let's take a look at the results. | mean_RMSE
| _____no_output_____ | Apache-2.0 | Prace_domowe/Praca_domowa2/Grupa1/EljasiakBartlomiej/main.ipynb | niladrem/2020L-WUM |
Classifying Fashion-MNISTNow it's your turn to build and train a neural network. You'll be using the [Fashion-MNIST dataset](https://github.com/zalandoresearch/fashion-mnist), a drop-in replacement for the MNIST dataset. MNIST is actually quite trivial with neural networks where you can easily achieve better than 97% accuracy. Fashion-MNIST is a set of 28x28 greyscale images of clothes. It's more complex than MNIST, so it's a better representation of the actual performance of your network, and a better representation of datasets you'll use in the real world.In this notebook, you'll build your own neural network. For the most part, you could just copy and paste the code from Part 3, but you wouldn't be learning. It's important for you to write the code yourself and get it to work. Feel free to consult the previous notebooks though as you work through this.First off, let's load the dataset through torchvision. | import torch
from torchvision import datasets, transforms
import helper
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,))])  # Fashion-MNIST images are single-channel
# Download and load the training data
trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
# Download and load the test data
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True) | _____no_output_____ | MIT | intro-to-pytorch/Part 4 - Fashion-MNIST (Exercises).ipynb | GeoffKriston/deep-learning-v2-pytorch |
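A note on the transform: `Normalize(mean, std)` computes `(x - mean) / std` per channel, so with mean = std = 0.5 the `ToTensor()` range [0, 1] is mapped to [-1, 1]. A quick NumPy sketch of that mapping:

```python
import numpy as np

pixels = np.array([0.0, 0.25, 0.5, 1.0])   # ToTensor() output range
normalized = (pixels - 0.5) / 0.5          # what Normalize does per channel

assert normalized.min() == -1.0 and normalized.max() == 1.0
assert np.allclose(normalized, [-1.0, -0.5, 0.0, 1.0])
```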
Here we can see one of the images. | image, label = next(iter(trainloader))
helper.imshow(image[0,:]); | _____no_output_____ | MIT | intro-to-pytorch/Part 4 - Fashion-MNIST (Exercises).ipynb | GeoffKriston/deep-learning-v2-pytorch |
Building the networkHere you should define your network. As with MNIST, each image is 28x28 which is a total of 784 pixels, and there are 10 classes. You should include at least one hidden layer. We suggest you use ReLU activations for the layers and to return the logits or log-softmax from the forward pass. It's up to you how many layers you add and the size of those layers. | # TODO: Define your network architecture here
from torch import nn
n_hidden1 = 1024
n_hidden2 = 128
n_hidden3 = 64
model = nn.Sequential(nn.Linear(784,n_hidden1),
nn.ReLU(),
nn.Linear(n_hidden1,n_hidden2),
nn.ReLU(),
nn.Linear(n_hidden2,n_hidden3),
nn.ReLU(),
nn.Linear(n_hidden3, 10),
nn.LogSoftmax(dim=1)) | _____no_output_____ | MIT | intro-to-pytorch/Part 4 - Fashion-MNIST (Exercises).ipynb | GeoffKriston/deep-learning-v2-pytorch |
Train the networkNow you should create your network and train it. First you'll want to define [the criterion](http://pytorch.org/docs/master/nn.htmlloss-functions) ( something like `nn.CrossEntropyLoss`) and [the optimizer](http://pytorch.org/docs/master/optim.html) (typically `optim.SGD` or `optim.Adam`).Then write the training code. Remember the training pass is a fairly straightforward process:* Make a forward pass through the network to get the logits * Use the logits to calculate the loss* Perform a backward pass through the network with `loss.backward()` to calculate the gradients* Take a step with the optimizer to update the weightsBy adjusting the hyperparameters (hidden units, learning rate, etc), you should be able to get the training loss below 0.4. | # TODO: Create the network, define the criterion and optimizer
from torch import optim
criterion = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(),lr=0.007)
# TODO: Train the network here
data = iter(trainloader)
epochs = 10
for e in range(epochs):
running_loss = 0
for images,labels in trainloader:
input=images.view(images.shape[0],784)
optimizer.zero_grad()
output = model(input)
loss = criterion(output,labels)
loss.backward()
optimizer.step()
running_loss+=loss.item()
else:
print "Epoch: ", e, "Training loss: ", running_loss/len(trainloader)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
# Test out your network!
dataiter = iter(testloader)
images, labels = next(dataiter)
img = images[0]
# Convert 2D image to 1D vector
img = img.resize_(1, 784)
# TODO: Calculate the class probabilities (softmax) for img
with torch.no_grad():
logprobs = model(img)
ps = torch.exp(logprobs)
# Plot the image and probabilities
helper.view_classify(img.resize_(1, 28, 28), ps, version='Fashion') | _____no_output_____ | MIT | intro-to-pytorch/Part 4 - Fashion-MNIST (Exercises).ipynb | GeoffKriston/deep-learning-v2-pytorch |
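Since the model ends in `LogSoftmax` and is trained with `NLLLoss`, the forward pass returns log-probabilities, and the loss for a sample is simply minus the log-probability of its true class; `torch.exp` turns those log-probabilities back into probabilities for plotting. A plain NumPy sketch of that pairing for a single 3-class example (illustrative values):

```python
import numpy as np

logits = np.array([2.0, 1.0, 0.1])
log_probs = logits - np.log(np.exp(logits).sum())   # log-softmax
probs = np.exp(log_probs)

true_class = 0
nll = -log_probs[true_class]                        # what NLLLoss computes for one sample

assert np.isclose(probs.sum(), 1.0)                 # exp of log-softmax is a distribution
assert np.isclose(nll, -np.log(probs[true_class]))
assert probs.argmax() == 0
```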
differential_privacy.analysis.rdp_accountant | # Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""RDP analysis of the Sampled Gaussian Mechanism.
Functionality for computing Renyi differential privacy (RDP) of an additive
Sampled Gaussian Mechanism (SGM). Its public interface consists of two methods:
compute_rdp(q, noise_multiplier, T, orders) computes RDP for SGM iterated
T times.
get_privacy_spent(orders, rdp, target_eps, target_delta) computes delta
(or eps) given RDP at multiple orders and
a target value for eps (or delta).
Example use:
Suppose that we have run an SGM applied to a function with l2-sensitivity 1.
Its parameters are given as a list of tuples (q1, sigma1, T1), ...,
(qk, sigma_k, Tk), and we wish to compute eps for a given delta.
The example code would be:
max_order = 32
orders = range(2, max_order + 1)
rdp = np.zeros_like(orders, dtype=float)
for q, sigma, T in parameters:
rdp += rdp_accountant.compute_rdp(q, sigma, T, orders)
eps, _, opt_order = rdp_accountant.get_privacy_spent(orders, rdp, target_delta=delta)
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import math
import sys
import numpy as np
from scipy import special
import six
########################
# LOG-SPACE ARITHMETIC #
########################
def _log_add(logx, logy):
"""Add two numbers in the log space."""
a, b = min(logx, logy), max(logx, logy)
if a == -np.inf: # adding 0
return b
# Use exp(a) + exp(b) = (exp(a - b) + 1) * exp(b)
return math.log1p(math.exp(a - b)) + b # log1p(x) = log(x + 1)
def _log_sub(logx, logy):
"""Subtract two numbers in the log space. Answer must be non-negative."""
if logx < logy:
raise ValueError("The result of subtraction must be non-negative.")
if logy == -np.inf: # subtracting 0
return logx
if logx == logy:
return -np.inf # 0 is represented as -np.inf in the log space.
try:
# Use exp(x) - exp(y) = (exp(x - y) - 1) * exp(y).
return math.log(
math.expm1(logx - logy)) + logy # expm1(x) = exp(x) - 1
except OverflowError:
return logx
def _log_print(logx):
"""Pretty print."""
if logx < math.log(sys.float_info.max):
return "{}".format(math.exp(logx))
else:
return "exp({})".format(logx)
def _compute_log_a_int(q, sigma, alpha):
"""Compute log(A_alpha) for integer alpha. 0 < q < 1."""
assert isinstance(alpha, six.integer_types)
# Initialize with 0 in the log space.
log_a = -np.inf
for i in range(alpha + 1):
log_coef_i = (math.log(special.binom(alpha, i)) + i * math.log(q) +
(alpha - i) * math.log(1 - q))
s = log_coef_i + (i * i - i) / (2 * (sigma**2))
log_a = _log_add(log_a, s)
return float(log_a)
def _compute_log_a_frac(q, sigma, alpha):
"""Compute log(A_alpha) for fractional alpha. 0 < q < 1."""
# The two parts of A_alpha, integrals over (-inf,z0] and [z0, +inf), are
# initialized to 0 in the log space:
log_a0, log_a1 = -np.inf, -np.inf
i = 0
z0 = sigma**2 * math.log(1 / q - 1) + .5
while True: # do ... until loop
coef = special.binom(alpha, i)
log_coef = math.log(abs(coef))
j = alpha - i
log_t0 = log_coef + i * math.log(q) + j * math.log(1 - q)
log_t1 = log_coef + j * math.log(q) + i * math.log(1 - q)
log_e0 = math.log(.5) + _log_erfc((i - z0) / (math.sqrt(2) * sigma))
log_e1 = math.log(.5) + _log_erfc((z0 - j) / (math.sqrt(2) * sigma))
log_s0 = log_t0 + (i * i - i) / (2 * (sigma**2)) + log_e0
log_s1 = log_t1 + (j * j - j) / (2 * (sigma**2)) + log_e1
if coef > 0:
log_a0 = _log_add(log_a0, log_s0)
log_a1 = _log_add(log_a1, log_s1)
else:
log_a0 = _log_sub(log_a0, log_s0)
log_a1 = _log_sub(log_a1, log_s1)
i += 1
if max(log_s0, log_s1) < -30:
break
return _log_add(log_a0, log_a1)
def _compute_log_a(q, sigma, alpha):
"""Compute log(A_alpha) for any positive finite alpha."""
if float(alpha).is_integer():
return _compute_log_a_int(q, sigma, int(alpha))
else:
return _compute_log_a_frac(q, sigma, alpha)
def _log_erfc(x):
"""Compute log(erfc(x)) with high accuracy for large x."""
try:
return math.log(2) + special.log_ndtr(-x * 2**.5)
except NameError:
# If log_ndtr is not available, approximate as follows:
r = special.erfc(x)
if r == 0.0:
# Using the Laurent series at infinity for the tail of the erfc function:
# erfc(x) ~ exp(-x^2-.5/x^2+.625/x^4)/(x*pi^.5)
# To verify in Mathematica:
# Series[Log[Erfc[x]] + Log[x] + Log[Pi]/2 + x^2, {x, Infinity, 6}]
return (-math.log(math.pi) / 2 - math.log(x) - x**2 - .5 * x**-2 +
.625 * x**-4 - 37. / 24. * x**-6 + 353. / 64. * x**-8)
else:
return math.log(r)
def _compute_delta(orders, rdp, eps):
"""Compute delta given a list of RDP values and target epsilon.
Args:
orders: An array (or a scalar) of orders.
rdp: A list (or a scalar) of RDP guarantees.
eps: The target epsilon.
Returns:
Pair of (delta, optimal_order).
Raises:
ValueError: If input is malformed.
"""
orders_vec = np.atleast_1d(orders)
rdp_vec = np.atleast_1d(rdp)
if len(orders_vec) != len(rdp_vec):
raise ValueError("Input lists must have the same length.")
deltas = np.exp((rdp_vec - eps) * (orders_vec - 1))
idx_opt = np.argmin(deltas)
return min(deltas[idx_opt], 1.), orders_vec[idx_opt]
def _compute_eps(orders, rdp, delta):
"""Compute epsilon given a list of RDP values and target delta.
Args:
orders: An array (or a scalar) of orders.
rdp: A list (or a scalar) of RDP guarantees.
delta: The target delta.
Returns:
Pair of (eps, optimal_order).
Raises:
ValueError: If input is malformed.
"""
orders_vec = np.atleast_1d(orders)
rdp_vec = np.atleast_1d(rdp)
if len(orders_vec) != len(rdp_vec):
raise ValueError("Input lists must have the same length.")
eps = rdp_vec - math.log(delta) / (orders_vec - 1)
idx_opt = np.nanargmin(eps) # Ignore NaNs
return eps[idx_opt], orders_vec[idx_opt]
def _compute_rdp(q, sigma, alpha):
"""Compute RDP of the Sampled Gaussian mechanism at order alpha.
Args:
q: The sampling rate.
sigma: The std of the additive Gaussian noise.
alpha: The order at which RDP is computed.
Returns:
RDP at alpha, can be np.inf.
"""
if q == 0:
return 0
if q == 1.:
return alpha / (2 * sigma**2)
if np.isinf(alpha):
return np.inf
return _compute_log_a(q, sigma, alpha) / (alpha - 1)
def compute_rdp(q, noise_multiplier, steps, orders):
"""Compute RDP of the Sampled Gaussian Mechanism.
Args:
q: The sampling rate.
noise_multiplier: The ratio of the standard deviation of the Gaussian noise
to the l2-sensitivity of the function to which it is added.
steps: The number of steps.
orders: An array (or a scalar) of RDP orders.
Returns:
The RDPs at all orders, can be np.inf.
"""
if np.isscalar(orders):
rdp = _compute_rdp(q, noise_multiplier, orders)
else:
rdp = np.array(
[_compute_rdp(q, noise_multiplier, order) for order in orders])
return rdp * steps
def get_privacy_spent(orders, rdp, target_eps=None, target_delta=None):
"""Compute delta (or eps) for given eps (or delta) from RDP values.
Args:
orders: An array (or a scalar) of RDP orders.
rdp: An array of RDP values. Must be of the same length as the orders list.
target_eps: If not None, the epsilon for which we compute the corresponding
delta.
target_delta: If not None, the delta for which we compute the corresponding
epsilon. Exactly one of target_eps and target_delta must be None.
Returns:
eps, delta, opt_order.
Raises:
ValueError: If target_eps and target_delta are messed up.
"""
if target_eps is None and target_delta is None:
raise ValueError(
"Exactly one out of eps and delta must be None. (Both are).")
if target_eps is not None and target_delta is not None:
raise ValueError(
"Exactly one out of eps and delta must be None. (None is).")
if target_eps is not None:
delta, opt_order = _compute_delta(orders, rdp, target_eps)
return target_eps, delta, opt_order
else:
eps, opt_order = _compute_eps(orders, rdp, target_delta)
return eps, target_delta, opt_order
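The docstring's accounting recipe can be checked end-to-end without TensorFlow. The sketch below is a standalone illustration (not the module above) specialized to the unsubsampled Gaussian mechanism, q = 1, where one step at order alpha has RDP alpha / (2 * sigma^2); `gaussian_rdp` and `rdp_to_eps` are hypothetical helper names, and `rdp_to_eps` applies the same conversion as `_compute_eps`:

```python
import math
import numpy as np

def gaussian_rdp(sigma, steps, orders):
    # RDP of `steps` compositions of the Gaussian mechanism (q = 1):
    # each step contributes alpha / (2 * sigma^2) at order alpha.
    orders = np.atleast_1d(orders).astype(float)
    return steps * orders / (2.0 * sigma ** 2)

def rdp_to_eps(orders, rdp, delta):
    # Same conversion as _compute_eps above:
    # eps(alpha) = rdp(alpha) - log(delta) / (alpha - 1), minimized over orders.
    orders = np.atleast_1d(orders).astype(float)
    eps = rdp - math.log(delta) / (orders - 1.0)
    idx = int(np.argmin(eps))
    return eps[idx], orders[idx]

orders = np.arange(2, 33)
rdp = gaussian_rdp(sigma=4.0, steps=1000, orders=orders)
eps, opt_order = rdp_to_eps(orders, rdp, delta=1e-5)
```

For these parameters the bound is tightest at the smallest order, since the per-step RDP term grows linearly in alpha while the log(1/delta) term shrinks.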
| _____no_output_____ | Apache-2.0 | Advanced_DP_CGAN/AdvancedDPCGAN.ipynb | reihaneh-torkzadehmahani/DP-CGAN |
dp query | # Copyright 2018, The TensorFlow Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""An interface for differentially private query mechanisms.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import abc
class DPQuery(object):
"""Interface for differentially private query mechanisms."""
__metaclass__ = abc.ABCMeta
@abc.abstractmethod
def initial_global_state(self):
"""Returns the initial global state for the DPQuery."""
pass
@abc.abstractmethod
def derive_sample_params(self, global_state):
"""Given the global state, derives parameters to use for the next sample.
Args:
global_state: The current global state.
Returns:
Parameters to use to process records in the next sample.
"""
pass
@abc.abstractmethod
def initial_sample_state(self, global_state, tensors):
"""Returns an initial state to use for the next sample.
Args:
global_state: The current global state.
tensors: A structure of tensors used as a template to create the initial
sample state.
Returns: An initial sample state.
"""
pass
@abc.abstractmethod
def accumulate_record(self, params, sample_state, record):
"""Accumulates a single record into the sample state.
Args:
params: The parameters for the sample.
sample_state: The current sample state.
record: The record to accumulate.
Returns:
The updated sample state.
"""
pass
@abc.abstractmethod
def get_noised_result(self, sample_state, global_state):
"""Gets query result after all records of sample have been accumulated.
Args:
sample_state: The sample state after all records have been accumulated.
global_state: The global state.
Returns:
A tuple (result, new_global_state) where "result" is the result of the
query and "new_global_state" is the updated global state.
"""
pass
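To make the accumulate-then-finalize protocol of this interface concrete, here is a hypothetical, noise-free toy implementation over plain Python floats (no TensorFlow); `PlainSumQuery` is not part of the library, only an illustration of the calling sequence:

```python
class PlainSumQuery(object):
    """Toy DPQuery-style object: sums floats, adds no noise."""

    def initial_global_state(self):
        return None  # this toy query keeps no global state

    def derive_sample_params(self, global_state):
        return None  # no per-sample parameters needed

    def initial_sample_state(self, global_state, template):
        return 0.0  # the running sum starts at zero

    def accumulate_record(self, params, sample_state, record):
        return sample_state + record

    def get_noised_result(self, sample_state, global_state):
        return sample_state, global_state  # a real query would add noise here

query = PlainSumQuery()
gs = query.initial_global_state()
params = query.derive_sample_params(gs)
state = query.initial_sample_state(gs, 0.0)
for record in [1.0, 2.0, 3.0]:
    state = query.accumulate_record(params, state, record)
result, gs = query.get_noised_result(state, gs)
```

This is exactly the sequence a DP optimizer drives: derive parameters, initialize a sample state, accumulate one record per microbatch, then finalize.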
| _____no_output_____ | Apache-2.0 | Advanced_DP_CGAN/AdvancedDPCGAN.ipynb | reihaneh-torkzadehmahani/DP-CGAN |
gaussian query
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Implements DPQuery interface for Gaussian average queries.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import tensorflow as tf
nest = tf.contrib.framework.nest
class GaussianSumQuery(DPQuery):
"""Implements DPQuery interface for Gaussian sum queries.
Accumulates clipped vectors, then adds Gaussian noise to the sum.
"""
# pylint: disable=invalid-name
_GlobalState = collections.namedtuple(
'_GlobalState', ['l2_norm_clip', 'stddev'])
def __init__(self, l2_norm_clip, stddev):
"""Initializes the GaussianSumQuery.
Args:
l2_norm_clip: The clipping norm to apply to the global norm of each
record.
stddev: The stddev of the noise added to the sum.
"""
self._l2_norm_clip = l2_norm_clip
self._stddev = stddev
def initial_global_state(self):
"""Returns the initial global state for the GaussianSumQuery."""
return self._GlobalState(float(self._l2_norm_clip), float(self._stddev))
def derive_sample_params(self, global_state):
"""Given the global state, derives parameters to use for the next sample.
Args:
global_state: The current global state.
Returns:
Parameters to use to process records in the next sample.
"""
return global_state.l2_norm_clip
def initial_sample_state(self, global_state, tensors):
"""Returns an initial state to use for the next sample.
Args:
global_state: The current global state.
tensors: A structure of tensors used as a template to create the initial
sample state.
Returns: An initial sample state.
"""
del global_state # unused.
return nest.map_structure(tf.zeros_like, tensors)
def accumulate_record(self, params, sample_state, record):
"""Accumulates a single record into the sample state.
Args:
params: The parameters for the sample.
sample_state: The current sample state.
record: The record to accumulate.
Returns:
The updated sample state.
"""
l2_norm_clip = params
record_as_list = nest.flatten(record)
clipped_as_list, _ = tf.clip_by_global_norm(record_as_list, l2_norm_clip)
clipped = nest.pack_sequence_as(record, clipped_as_list)
return nest.map_structure(tf.add, sample_state, clipped)
def get_noised_result(self, sample_state, global_state, add_noise=True):
"""Gets noised sum after all records of sample have been accumulated.
Args:
sample_state: The sample state after all records have been accumulated.
global_state: The global state.
Returns:
A tuple (estimate, new_global_state) where "estimate" is the estimated
sum of the records and "new_global_state" is the updated global state.
"""
def add_noise_to(v):
# Note: naming this inner function `add_noise` would shadow the boolean
# `add_noise` parameter, making the check below always truthy.
if add_noise:
return v + tf.random_normal(tf.shape(v), stddev=global_state.stddev)
else:
return v
return nest.map_structure(add_noise_to, sample_state), global_state
class GaussianAverageQuery(DPQuery):
"""Implements DPQuery interface for Gaussian average queries.
Accumulates clipped vectors, adds Gaussian noise, and normalizes.
Note that we use "fixed-denominator" estimation: the denominator should be
specified as the expected number of records per sample. Accumulating the
denominator separately would also be possible but would produce a higher-
variance estimator.
"""
# pylint: disable=invalid-name
_GlobalState = collections.namedtuple(
'_GlobalState', ['sum_state', 'denominator'])
def __init__(self, l2_norm_clip, sum_stddev, denominator):
"""Initializes the GaussianAverageQuery.
Args:
l2_norm_clip: The clipping norm to apply to the global norm of each
record.
sum_stddev: The stddev of the noise added to the sum (before
normalization).
denominator: The normalization constant (applied after noise is added to
the sum).
"""
self._numerator = GaussianSumQuery(l2_norm_clip, sum_stddev)
self._denominator = denominator
def initial_global_state(self):
"""Returns the initial global state for the GaussianAverageQuery."""
sum_global_state = self._numerator.initial_global_state()
return self._GlobalState(sum_global_state, float(self._denominator))
def derive_sample_params(self, global_state):
"""Given the global state, derives parameters to use for the next sample.
Args:
global_state: The current global state.
Returns:
Parameters to use to process records in the next sample.
"""
return self._numerator.derive_sample_params(global_state.sum_state)
def initial_sample_state(self, global_state, tensors):
"""Returns an initial state to use for the next sample.
Args:
global_state: The current global state.
tensors: A structure of tensors used as a template to create the initial
sample state.
Returns: An initial sample state.
"""
# GaussianAverageQuery has no state beyond the sum state.
return self._numerator.initial_sample_state(global_state.sum_state, tensors)
def accumulate_record(self, params, sample_state, record):
"""Accumulates a single record into the sample state.
Args:
params: The parameters for the sample.
sample_state: The current sample state.
record: The record to accumulate.
Returns:
The updated sample state.
"""
return self._numerator.accumulate_record(params, sample_state, record)
def get_noised_result(self, sample_state, global_state, add_noise=True):
"""Gets noised average after all records of sample have been accumulated.
Args:
sample_state: The sample state after all records have been accumulated.
global_state: The global state.
Returns:
A tuple (estimate, new_global_state) where "estimate" is the estimated
average of the records and "new_global_state" is the updated global state.
"""
noised_sum, new_sum_global_state = self._numerator.get_noised_result(
sample_state, global_state.sum_state, add_noise)
new_global_state = self._GlobalState(
new_sum_global_state, global_state.denominator)
def normalize(v):
return tf.truediv(v, global_state.denominator)
return nest.map_structure(normalize, noised_sum), new_global_state
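Stripped of the nested-tensor machinery, the two queries above reduce to three array operations: clip each record to an L2 norm, sum and add Gaussian noise, then divide by a fixed denominator. A minimal numpy sketch (assuming a flat gradient vector rather than a nested structure; `clipped_noisy_average` is a hypothetical name):

```python
import numpy as np

def clipped_noisy_average(records, l2_norm_clip, stddev, denominator, rng):
    # Mirrors GaussianAverageQuery on flat numpy vectors: clip each record
    # to the given L2 norm, sum, add Gaussian noise, normalize.
    total = np.zeros_like(records[0], dtype=float)
    for r in records:
        norm = np.linalg.norm(r)
        scale = min(1.0, l2_norm_clip / norm) if norm > 0 else 1.0
        total += r * scale
    total += rng.normal(0.0, stddev, size=total.shape)
    return total / denominator

rng = np.random.default_rng(0)
records = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]
avg = clipped_noisy_average(records, l2_norm_clip=1.0, stddev=0.0,
                            denominator=2.0, rng=rng)
```

With `stddev=0.0` the result is deterministic: the first record (norm 5) is scaled down to norm 1, the second (norm 0.5) passes through unchanged.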
|
WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
* https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
* https://github.com/tensorflow/addons
If you depend on functionality not listed there, please file an issue.
| Apache-2.0 | Advanced_DP_CGAN/AdvancedDPCGAN.ipynb | reihaneh-torkzadehmahani/DP-CGAN |
our_dp_optimizer | # Copyright 2018, The TensorFlow Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Differentially private optimizers for TensorFlow."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
def make_optimizer_class(cls):
"""Constructs a DP optimizer class from an existing one."""
if (tf.train.Optimizer.compute_gradients.__code__ is
not cls.compute_gradients.__code__):
tf.logging.warning(
'WARNING: Calling make_optimizer_class() on class %s that overrides '
'method compute_gradients(). Check to ensure that '
'make_optimizer_class() does not interfere with overridden version.',
cls.__name__)
class DPOptimizerClass(cls):
"""Differentially private subclass of given class cls."""
def __init__(
self,
l2_norm_clip,
noise_multiplier,
dp_average_query,
num_microbatches,
unroll_microbatches=False,
*args, # pylint: disable=keyword-arg-before-vararg
**kwargs):
super(DPOptimizerClass, self).__init__(*args, **kwargs)
self._dp_average_query = dp_average_query
self._num_microbatches = num_microbatches
self._global_state = self._dp_average_query.initial_global_state()
# TODO(b/122613513): Set unroll_microbatches=True to avoid this bug.
# Beware: When num_microbatches is large (>100), enabling this parameter
# may cause an OOM error.
self._unroll_microbatches = unroll_microbatches
def dp_compute_gradients(self,
loss,
var_list,
gate_gradients=tf.train.Optimizer.GATE_OP,
aggregation_method=None,
colocate_gradients_with_ops=False,
grad_loss=None,
add_noise=True):
# Note: it would be closer to the correct i.i.d. sampling of records if
# we sampled each microbatch from the appropriate binomial distribution,
# although that still wouldn't be quite correct because it would be
# sampling from the dataset without replacement.
microbatches_losses = tf.reshape(loss,
[self._num_microbatches, -1])
sample_params = (self._dp_average_query.derive_sample_params(
self._global_state))
def process_microbatch(i, sample_state):
"""Process one microbatch (record) with privacy helper."""
grads, _ = zip(*super(cls, self).compute_gradients(
tf.gather(microbatches_losses, [i]), var_list,
gate_gradients, aggregation_method,
colocate_gradients_with_ops, grad_loss))
# Converts tensor to list to replace None gradients with zero
grads1 = list(grads)
for inx in range(0, len(grads)):
if grads[inx] is None:
grads1[inx] = tf.zeros_like(var_list[inx])
grads_list = grads1
sample_state = self._dp_average_query.accumulate_record(
sample_params, sample_state, grads_list)
return sample_state
if var_list is None:
var_list = (tf.trainable_variables() + tf.get_collection(
tf.GraphKeys.TRAINABLE_RESOURCE_VARIABLES))
sample_state = self._dp_average_query.initial_sample_state(
self._global_state, var_list)
if self._unroll_microbatches:
for idx in range(self._num_microbatches):
sample_state = process_microbatch(idx, sample_state)
else:
# Use of while_loop here requires that sample_state be a nested
# structure of tensors. In general, we would prefer to allow it to be
# an arbitrary opaque type.
cond_fn = lambda i, _: tf.less(i, self._num_microbatches)
body_fn = lambda i, state: [
tf.add(i, 1), process_microbatch(i, state)
]
idx = tf.constant(0)
_, sample_state = tf.while_loop(cond_fn, body_fn,
[idx, sample_state])
final_grads, self._global_state = (
self._dp_average_query.get_noised_result(
sample_state, self._global_state, add_noise))
return (final_grads)
def minimize(self,
d_loss_real,
d_loss_fake,
global_step=None,
var_list=None,
gate_gradients=tf.train.Optimizer.GATE_OP,
aggregation_method=None,
colocate_gradients_with_ops=False,
name=None,
grad_loss=None):
"""Minimize using sanitized gradients
Args:
d_loss_real: the loss tensor for real data
d_loss_fake: the loss tensor for fake data
global_step: the optional global step.
var_list: the optional variables.
name: the optional name.
Returns:
the operation that runs one step of DP gradient descent.
"""
# First validate the var_list
if var_list is None:
var_list = tf.trainable_variables()
for var in var_list:
if not isinstance(var, tf.Variable):
raise TypeError("Argument is not a variable.Variable: %s" %
var)
# ------------------ OUR METHOD --------------------------------
r_grads = self.dp_compute_gradients(
d_loss_real,
var_list=var_list,
gate_gradients=gate_gradients,
aggregation_method=aggregation_method,
colocate_gradients_with_ops=colocate_gradients_with_ops,
grad_loss=grad_loss, add_noise = True)
f_grads = self.dp_compute_gradients(
d_loss_fake,
var_list=var_list,
gate_gradients=gate_gradients,
aggregation_method=aggregation_method,
colocate_gradients_with_ops=colocate_gradients_with_ops,
grad_loss=grad_loss,
add_noise=False)
# Compute the overall gradients
s_grads = [(r_grads[idx] + f_grads[idx])
for idx in range(len(r_grads))]
sanitized_grads_and_vars = list(zip(s_grads, var_list))
self._assert_valid_dtypes(
[v for g, v in sanitized_grads_and_vars if g is not None])
# Apply the overall gradients
apply_grads = self.apply_gradients(sanitized_grads_and_vars,
global_step=global_step,
name=name)
return apply_grads
# -----------------------------------------------------------------
return DPOptimizerClass
def make_gaussian_optimizer_class(cls):
"""Constructs a DP optimizer with Gaussian averaging of updates."""
class DPGaussianOptimizerClass(make_optimizer_class(cls)):
"""DP subclass of given class cls using Gaussian averaging."""
def __init__(
self,
l2_norm_clip,
noise_multiplier,
num_microbatches,
unroll_microbatches=False,
*args, # pylint: disable=keyword-arg-before-vararg
**kwargs):
dp_average_query = GaussianAverageQuery(
l2_norm_clip, l2_norm_clip * noise_multiplier,
num_microbatches)
self.l2_norm_clip = l2_norm_clip
self.noise_multiplier = noise_multiplier
super(DPGaussianOptimizerClass,
self).__init__(l2_norm_clip, noise_multiplier,
dp_average_query, num_microbatches,
unroll_microbatches, *args, **kwargs)
return DPGaussianOptimizerClass
DPAdagradOptimizer = make_optimizer_class(tf.train.AdagradOptimizer)
DPAdamOptimizer = make_optimizer_class(tf.train.AdamOptimizer)
DPGradientDescentOptimizer = make_optimizer_class(
tf.train.GradientDescentOptimizer)
DPAdagradGaussianOptimizer = make_gaussian_optimizer_class(
tf.train.AdagradOptimizer)
DPAdamGaussianOptimizer = make_gaussian_optimizer_class(tf.train.AdamOptimizer)
DPGradientDescentGaussianOptimizer = make_gaussian_optimizer_class(
tf.train.GradientDescentOptimizer)
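The `minimize` method above clips and noises the real-loss microbatch gradients, clips the fake-loss ones without noise, and sums the two averages before applying. A numpy sketch of that step under the simplifying assumption of a single flat parameter vector (`dp_disc_step` is a hypothetical name, not part of the optimizer API):

```python
import numpy as np

def dp_disc_step(real_grads, fake_grads, clip, sigma, rng):
    # Mirrors DPOptimizerClass.minimize: real-loss microbatch gradients are
    # clipped and noised, fake-loss ones are clipped but left noise-free,
    # and the two averages are summed.
    def noisy_avg(grads, stddev):
        total = np.zeros_like(grads[0], dtype=float)
        for g in grads:
            n = np.linalg.norm(g)
            total += g * min(1.0, clip / n) if n > 0 else g
        total += rng.normal(0.0, stddev, size=total.shape)
        return total / len(grads)
    return noisy_avg(real_grads, clip * sigma) + noisy_avg(fake_grads, 0.0)

rng = np.random.default_rng(1)
step_grad = dp_disc_step([np.array([6.0, 8.0])], [np.array([0.0, 1.0])],
                         clip=1.0, sigma=0.0, rng=rng)
```

Adding noise only once (on the real-data branch) is the paper's optimization: the fake-data gradients depend on no private records, so they need no extra perturbation.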
| _____no_output_____ | Apache-2.0 | Advanced_DP_CGAN/AdvancedDPCGAN.ipynb | reihaneh-torkzadehmahani/DP-CGAN |
gan.ops | """
Most codes from https://github.com/carpedm20/DCGAN-tensorflow
"""
import math
import numpy as np
import tensorflow as tf
if "concat_v2" in dir(tf):
def concat(tensors, axis, *args, **kwargs):
return tf.concat_v2(tensors, axis, *args, **kwargs)
else:
def concat(tensors, axis, *args, **kwargs):
return tf.concat(tensors, axis, *args, **kwargs)
def bn(x, is_training, scope):
return tf.contrib.layers.batch_norm(x,
decay=0.9,
updates_collections=None,
epsilon=1e-5,
scale=True,
is_training=is_training,
scope=scope)
def conv_out_size_same(size, stride):
return int(math.ceil(float(size) / float(stride)))
def conv_cond_concat(x, y):
"""Concatenate conditioning vector on feature map axis."""
x_shapes = x.get_shape()
y_shapes = y.get_shape()
return concat(
[x, y * tf.ones([x_shapes[0], x_shapes[1], x_shapes[2], y_shapes[3]])],
3)
def conv2d(input_,
output_dim,
k_h=5,
k_w=5,
d_h=2,
d_w=2,
stddev=0.02,
name="conv2d"):
with tf.variable_scope(name):
w = tf.get_variable(
'w', [k_h, k_w, input_.get_shape()[-1], output_dim],
initializer=tf.truncated_normal_initializer(stddev=stddev))
conv = tf.nn.conv2d(input_,
w,
strides=[1, d_h, d_w, 1],
padding='SAME')
biases = tf.get_variable('biases', [output_dim],
initializer=tf.constant_initializer(0.0))
conv = tf.reshape(tf.nn.bias_add(conv, biases), conv.get_shape())
return conv
def deconv2d(input_,
output_shape,
k_h=5,
k_w=5,
d_h=2,
d_w=2,
name="deconv2d",
stddev=0.02,
with_w=False):
with tf.variable_scope(name):
# filter : [height, width, output_channels, in_channels]
w = tf.get_variable(
'w', [k_h, k_w, output_shape[-1],
input_.get_shape()[-1]],
initializer=tf.random_normal_initializer(stddev=stddev))
try:
deconv = tf.nn.conv2d_transpose(input_,
w,
output_shape=output_shape,
strides=[1, d_h, d_w, 1])
# Support for versions of TensorFlow before 0.7.0
except AttributeError:
deconv = tf.nn.deconv2d(input_,
w,
output_shape=output_shape,
strides=[1, d_h, d_w, 1])
biases = tf.get_variable('biases', [output_shape[-1]],
initializer=tf.constant_initializer(0.0))
deconv = tf.reshape(tf.nn.bias_add(deconv, biases), deconv.get_shape())
if with_w:
return deconv, w, biases
else:
return deconv
def lrelu(x, leak=0.2, name="lrelu"):
return tf.maximum(x, leak * x)
def linear(input_,
output_size,
scope=None,
stddev=0.02,
bias_start=0.0,
with_w=False):
shape = input_.get_shape().as_list()
with tf.variable_scope(scope or "Linear"):
matrix = tf.get_variable("Matrix", [shape[1], output_size], tf.float32,
tf.random_normal_initializer(stddev=stddev))
bias = tf.get_variable("bias", [output_size],
initializer=tf.constant_initializer(bias_start))
if with_w:
return tf.matmul(input_, matrix) + bias, matrix, bias
else:
return tf.matmul(input_, matrix) + bias
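`conv_out_size_same` computes the output spatial size of a SAME-padded strided convolution, ceil(size / stride). A quick check that chaining four stride-2 layers reproduces the sizes the CIFAR-10 generator/discriminator hard-code (32 → 16 → 8 → 4 → 2):

```python
import math

def conv_out_size_same(size, stride):
    # SAME padding: output size is ceil(input size / stride).
    return int(math.ceil(float(size) / float(stride)))

# Chain four stride-2 layers from a 32x32 CIFAR-10 input.
sizes = [32]
for _ in range(4):
    sizes.append(conv_out_size_same(sizes[-1], 2))
```

The resulting list matches the `h_size`, `h_size_2`, ..., `h_size_16` constants used in the generator below; for 28x28 MNIST the same formula gives 14 and 7.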
| _____no_output_____ | Apache-2.0 | Advanced_DP_CGAN/AdvancedDPCGAN.ipynb | reihaneh-torkzadehmahani/DP-CGAN |
OUR DP CGAN | # -*- coding: utf-8 -*-
from __future__ import division
from keras.datasets import cifar10
from mlxtend.data import loadlocal_mnist
from sklearn.preprocessing import label_binarize
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import roc_curve, auc
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
class OUR_DP_CGAN(object):
model_name = "OUR_DP_CGAN" # name for checkpoint
def __init__(self, sess, epoch, batch_size, z_dim, epsilon, delta, sigma,
clip_value, lr, dataset_name, base_dir, checkpoint_dir,
result_dir, log_dir):
self.sess = sess
self.dataset_name = dataset_name
self.base_dir = base_dir
self.checkpoint_dir = checkpoint_dir
self.result_dir = result_dir
self.log_dir = log_dir
self.epoch = epoch
self.batch_size = batch_size
self.epsilon = epsilon
self.delta = delta
self.noise_multiplier = sigma
self.l2_norm_clip = clip_value
self.lr = lr
if dataset_name == 'mnist' or dataset_name == 'fashion-mnist':
# parameters
self.input_height = 28
self.input_width = 28
self.output_height = 28
self.output_width = 28
self.z_dim = z_dim # dimension of noise-vector
self.y_dim = 10 # dimension of condition-vector (label)
self.c_dim = 1
# train
self.learningRateD = self.lr
self.learningRateG = self.learningRateD * 5
self.beta1 = 0.5
self.beta2 = 0.99
# test
self.sample_num = 64 # number of generated images to be saved
# load mnist
self.data_X, self.data_y = load_mnist(train = True)
# get number of batches for a single epoch
self.num_batches = len(self.data_X) // self.batch_size
elif dataset_name == 'cifar10':
# parameters
self.input_height = 32
self.input_width = 32
self.output_height = 32
self.output_width = 32
self.z_dim = 100 # dimension of noise-vector
self.y_dim = 10 # dimension of condition-vector (label)
self.c_dim = 3 # color dimension
# train
# self.learning_rate = 0.0002 # 1e-3, 1e-4
self.learningRateD = 1e-3
self.learningRateG = 1e-4
self.beta1 = 0.5
self.beta2 = 0.99
# test
self.sample_num = 64 # number of generated images to be saved
# load cifar10
self.data_X, self.data_y = load_cifar10(train=True)
self.num_batches = len(self.data_X) // self.batch_size
else:
raise NotImplementedError
def discriminator(self, x, y, is_training=True, reuse=False):
# Network Architecture is exactly same as in infoGAN (https://arxiv.org/abs/1606.03657)
# Architecture : (64)4c2s-(128)4c2s_BL-FC1024_BL-FC1_S
with tf.variable_scope("discriminator", reuse=reuse):
# merge image and label
if (self.dataset_name == "mnist"):
y = tf.reshape(y, [self.batch_size, 1, 1, self.y_dim])
x = conv_cond_concat(x, y)
net = lrelu(conv2d(x, 64, 4, 4, 2, 2, name='d_conv1'))
net = lrelu(
bn(conv2d(net, 128, 4, 4, 2, 2, name='d_conv2'),
is_training=is_training,
scope='d_bn2'))
net = tf.reshape(net, [self.batch_size, -1])
net = lrelu(
bn(linear(net, 1024, scope='d_fc3'),
is_training=is_training,
scope='d_bn3'))
out_logit = linear(net, 1, scope='d_fc4')
out = tf.nn.sigmoid(out_logit)
elif (self.dataset_name == "cifar10"):
y = tf.reshape(y, [self.batch_size, 1, 1, self.y_dim])
x = conv_cond_concat(x, y)
lrelu_slope = 0.2
kernel_size = 5
w_init = tf.contrib.layers.xavier_initializer()
net = lrelu(
conv2d(x,
64,
5,
5,
2,
2,
name='d_conv1' + '_' + self.dataset_name))
net = lrelu(
bn(conv2d(net,
128,
5,
5,
2,
2,
name='d_conv2' + '_' + self.dataset_name),
is_training=is_training,
scope='d_bn2'))
net = lrelu(
bn(conv2d(net,
256,
5,
5,
2,
2,
name='d_conv3' + '_' + self.dataset_name),
is_training=is_training,
scope='d_bn3'))
net = lrelu(
bn(conv2d(net,
512,
5,
5,
2,
2,
name='d_conv4' + '_' + self.dataset_name),
is_training=is_training,
scope='d_bn4'))
net = tf.reshape(net, [self.batch_size, -1])
out_logit = linear(net,
1,
scope='d_fc5' + '_' + self.dataset_name)
out = tf.nn.sigmoid(out_logit)
return out, out_logit
def generator(self, z, y, is_training=True, reuse=False):
# Network Architecture is exactly same as in infoGAN (https://arxiv.org/abs/1606.03657)
# Architecture : FC1024_BR-FC7x7x128_BR-(64)4dc2s_BR-(1)4dc2s_S
with tf.variable_scope("generator", reuse=reuse):
if (self.dataset_name == "mnist"):
# merge noise and label
z = concat([z, y], 1)
net = tf.nn.relu(
bn(linear(z, 1024, scope='g_fc1'),
is_training=is_training,
scope='g_bn1'))
net = tf.nn.relu(
bn(linear(net, 128 * 7 * 7, scope='g_fc2'),
is_training=is_training,
scope='g_bn2'))
net = tf.reshape(net, [self.batch_size, 7, 7, 128])
net = tf.nn.relu(
bn(deconv2d(net, [self.batch_size, 14, 14, 64],
4,
4,
2,
2,
name='g_dc3'),
is_training=is_training,
scope='g_bn3'))
out = tf.nn.sigmoid(
deconv2d(net, [self.batch_size, 28, 28, 1],
4,
4,
2,
2,
name='g_dc4'))
elif (self.dataset_name == "cifar10"):
h_size = 32
h_size_2 = 16
h_size_4 = 8
h_size_8 = 4
h_size_16 = 2
z = concat([z, y], 1)
net = linear(z,
512 * h_size_16 * h_size_16,
scope='g_fc1' + '_' + self.dataset_name)
net = tf.nn.relu(
bn(tf.reshape(
net, [self.batch_size, h_size_16, h_size_16, 512]),
is_training=is_training,
scope='g_bn1'))
net = tf.nn.relu(
bn(deconv2d(net,
[self.batch_size, h_size_8, h_size_8, 256],
5,
5,
2,
2,
name='g_dc2' + '_' + self.dataset_name),
is_training=is_training,
scope='g_bn2'))
net = tf.nn.relu(
bn(deconv2d(net,
[self.batch_size, h_size_4, h_size_4, 128],
5,
5,
2,
2,
name='g_dc3' + '_' + self.dataset_name),
is_training=is_training,
scope='g_bn3'))
net = tf.nn.relu(
bn(deconv2d(net, [self.batch_size, h_size_2, h_size_2, 64],
5,
5,
2,
2,
name='g_dc4' + '_' + self.dataset_name),
is_training=is_training,
scope='g_bn4'))
out = tf.nn.tanh(
deconv2d(net, [
self.batch_size, self.output_height, self.output_width,
self.c_dim
],
5,
5,
2,
2,
name='g_dc5' + '_' + self.dataset_name))
return out
def build_model(self):
# some parameters
image_dims = [self.input_height, self.input_width, self.c_dim]
bs = self.batch_size
""" Graph Input """
# images
self.inputs = tf.placeholder(tf.float32, [bs] + image_dims,
name='real_images')
# labels
self.y = tf.placeholder(tf.float32, [bs, self.y_dim], name='y')
# noises
self.z = tf.placeholder(tf.float32, [bs, self.z_dim], name='z')
""" Loss Function """
# output of D for real images
D_real, D_real_logits = self.discriminator(self.inputs,
self.y,
is_training=True,
reuse=False)
# output of D for fake images
G = self.generator(self.z, self.y, is_training=True, reuse=False)
D_fake, D_fake_logits = self.discriminator(G,
self.y,
is_training=True,
reuse=True)
# get loss for discriminator
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(
logits=D_real_logits, labels=tf.ones_like(D_real)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(
logits=D_fake_logits, labels=tf.zeros_like(D_fake)))
self.d_loss_real_vec = tf.nn.sigmoid_cross_entropy_with_logits(
logits=D_real_logits, labels=tf.ones_like(D_real))
self.d_loss_fake_vec = tf.nn.sigmoid_cross_entropy_with_logits(
logits=D_fake_logits, labels=tf.zeros_like(D_fake))
self.d_loss = d_loss_real + d_loss_fake
# get loss for generator
self.g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(
logits=D_fake_logits, labels=tf.ones_like(D_fake)))
""" Training """
# divide trainable variables into a group for D and a group for G
t_vars = tf.trainable_variables()
d_vars = [
var for var in t_vars if var.name.startswith('discriminator')
]
g_vars = [var for var in t_vars if var.name.startswith('generator')]
# optimizers
with tf.control_dependencies(tf.get_collection(
tf.GraphKeys.UPDATE_OPS)):
d_optim_init = DPGradientDescentGaussianOptimizer(
l2_norm_clip=self.l2_norm_clip,
noise_multiplier=self.noise_multiplier,
num_microbatches=self.batch_size,
learning_rate=self.learningRateD)
global_step = tf.train.get_global_step()
self.d_optim = d_optim_init.minimize(
d_loss_real=self.d_loss_real_vec,
d_loss_fake=self.d_loss_fake_vec,
global_step=global_step,
var_list=d_vars)
optimizer = DPGradientDescentGaussianOptimizer(
l2_norm_clip=self.l2_norm_clip,
noise_multiplier=self.noise_multiplier,
num_microbatches=self.batch_size,
learning_rate=self.learningRateD)
self.g_optim = tf.train.GradientDescentOptimizer(self.learningRateG) \
.minimize(self.g_loss, var_list=g_vars)
"""" Testing """
self.fake_images = self.generator(self.z,
self.y,
is_training=False,
reuse=True)
""" Summary """
d_loss_real_sum = tf.summary.scalar("d_loss_real", d_loss_real)
d_loss_fake_sum = tf.summary.scalar("d_loss_fake", d_loss_fake)
d_loss_sum = tf.summary.scalar("d_loss", self.d_loss)
g_loss_sum = tf.summary.scalar("g_loss", self.g_loss)
# final summary operations
self.g_sum = tf.summary.merge([d_loss_fake_sum, g_loss_sum])
self.d_sum = tf.summary.merge([d_loss_real_sum, d_loss_sum])
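The discriminator and generator losses built above are the standard sigmoid cross-entropies on real/fake logits. As a quick sanity check, independent of the TensorFlow graph, the numerically stable form of that cross-entropy (the same one `tf.nn.sigmoid_cross_entropy_with_logits` uses) gives `d_loss = 2·log 2` for a maximally confused discriminator (D = 0.5 on both real and fake inputs) — the classic GAN equilibrium value:

```python
import numpy as np

def sigmoid_xent(logits, labels):
    # numerically stable sigmoid cross-entropy:
    # max(x, 0) - x*z + log(1 + exp(-|x|)), mirroring TF's formula
    return np.maximum(logits, 0) - logits * labels + np.log1p(np.exp(-np.abs(logits)))

# logits = 0  <=>  sigmoid(logits) = 0.5, i.e. a fully confused discriminator
logits = np.zeros(4)
d_loss_real = sigmoid_xent(logits, np.ones_like(logits)).mean()
d_loss_fake = sigmoid_xent(logits, np.zeros_like(logits)).mean()
d_loss = d_loss_real + d_loss_fake
print(d_loss)  # 2*log(2) ≈ 1.386
```

Watching `d_loss` hover near 1.386 in the summaries is a rough sign that neither network dominates.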
def train(self):
# initialize all variables
tf.global_variables_initializer().run()
# graph inputs for visualize training results
self.sample_z = np.random.uniform(-1,
1,
size=(self.batch_size, self.z_dim))
self.test_labels = self.data_y[0:self.batch_size]
# saver to save model
self.saver = tf.train.Saver()
# summary writer
self.writer = tf.summary.FileWriter(
self.log_dir + '/' + self.model_name, self.sess.graph)
# restore checkpoint if it exists
could_load, checkpoint_counter = self.load(self.checkpoint_dir)
if could_load:
start_epoch = (int)(checkpoint_counter / self.num_batches)
start_batch_id = checkpoint_counter - start_epoch * self.num_batches
counter = checkpoint_counter
print(" [*] Load SUCCESS")
else:
start_epoch = 0
start_batch_id = 0
counter = 1
print(" [!] Load failed...")
# loop for epoch
epoch = start_epoch
should_terminate = False
while (epoch < self.epoch and not should_terminate):
# get batch data
for idx in range(start_batch_id, self.num_batches):
batch_images = self.data_X[idx * self.batch_size:(idx + 1) *
self.batch_size]
batch_labels = self.data_y[idx * self.batch_size:(idx + 1) *
self.batch_size]
batch_z = np.random.uniform(
-1, 1, [self.batch_size, self.z_dim]).astype(np.float32)
# update D network
_, summary_str, d_loss = self.sess.run(
[self.d_optim, self.d_sum, self.d_loss],
feed_dict={
self.inputs: batch_images,
self.y: batch_labels,
self.z: batch_z
})
self.writer.add_summary(summary_str, counter)
eps = self.compute_epsilon((epoch * self.num_batches) + idx)
if (eps > self.epsilon):
should_terminate = True
print("TERMINATE !! Run out of Privacy Budget.....")
epoch = self.epoch
break
# update G network
_, summary_str, g_loss = self.sess.run(
[self.g_optim, self.g_sum, self.g_loss],
feed_dict={
self.inputs: batch_images,
self.y: batch_labels,
self.z: batch_z
})
self.writer.add_summary(summary_str, counter)
# display training status
counter += 1
_ = self.sess.run(self.fake_images,
feed_dict={
self.z: self.sample_z,
self.y: self.test_labels
})
# save training results for every 100 steps
if np.mod(counter, 100) == 0:
print("Iteration : " + str(idx) + " Eps: " + str(eps))
samples = self.sess.run(self.fake_images,
feed_dict={
self.z: self.sample_z,
self.y: self.test_labels
})
tot_num_samples = min(self.sample_num, self.batch_size)
manifold_h = int(np.floor(np.sqrt(tot_num_samples)))
manifold_w = int(np.floor(np.sqrt(tot_num_samples)))
save_images(
samples[:manifold_h * manifold_w, :, :, :],
[manifold_h, manifold_w],
check_folder(self.result_dir + '/' + self.model_dir) +
'/' + self.model_name +
'_train_{:02d}_{:04d}.png'.format(epoch, idx))
epoch = epoch + 1
# After an epoch, start_batch_id is set to zero
# a non-zero value is only used for the first epoch after loading a pre-trained model
start_batch_id = 0
# save model
self.save(self.checkpoint_dir, counter)
# show temporal results
if (self.dataset_name == 'mnist'):
self.visualize_results_MNIST(epoch)
elif (self.dataset_name == 'cifar10'):
self.visualize_results_CIFAR(epoch)
# save model for final step
self.save(self.checkpoint_dir, counter)
def compute_fpr_tpr_roc(Y_test, Y_score):
n_classes = Y_score.shape[1]
false_positive_rate = dict()
true_positive_rate = dict()
roc_auc = dict()
for class_cntr in range(n_classes):
false_positive_rate[class_cntr], true_positive_rate[
class_cntr], _ = roc_curve(Y_test[:, class_cntr],
Y_score[:, class_cntr])
roc_auc[class_cntr] = auc(false_positive_rate[class_cntr],
true_positive_rate[class_cntr])
# Compute micro-average ROC curve and ROC area
false_positive_rate["micro"], true_positive_rate[
"micro"], _ = roc_curve(Y_test.ravel(), Y_score.ravel())
roc_auc["micro"] = auc(false_positive_rate["micro"],
true_positive_rate["micro"])
return false_positive_rate, true_positive_rate, roc_auc
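The micro-average computed above flattens every (sample, class) decision into one big binary problem before running the ROC analysis. A dependency-free sketch of the same idea on toy one-hot data, using the rank-statistic (Mann-Whitney) form of AUC instead of scikit-learn's `roc_curve`/`auc` (equivalent up to tie handling; the labels and scores below are made up for illustration):

```python
import numpy as np

# toy 3-class problem: one-hot ground truth and softmax-like scores
Y_test = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0]])
Y_score = np.array([[0.8, 0.1, 0.1],
                    [0.2, 0.7, 0.1],
                    [0.1, 0.2, 0.7],
                    [0.6, 0.3, 0.1]])

# micro-averaging: ravel all (sample, class) decisions into one binary problem
y = Y_test.ravel()
s = Y_score.ravel()

# AUC as the fraction of (positive, negative) pairs ranked correctly
pos, neg = s[y == 1], s[y == 0]
auc_micro = np.mean(pos[:, None] > neg[None, :])
print(auc_micro)  # 1.0 — every true class gets the top score here
```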
def classify(X_train,
Y_train,
X_test,
classiferName,
random_state_value=0):
if classiferName == "lr":
classifier = OneVsRestClassifier(
LogisticRegression(solver='lbfgs',
multi_class='multinomial',
random_state=random_state_value))
elif classiferName == "mlp":
classifier = OneVsRestClassifier(
MLPClassifier(random_state=random_state_value, alpha=1))
elif classiferName == "rf":
classifier = OneVsRestClassifier(
RandomForestClassifier(n_estimators=100,
random_state=random_state_value))
else:
print("Classifier not in the list!")
exit()
Y_score = classifier.fit(X_train, Y_train).predict_proba(X_test)
return Y_score
batch_size = int(self.batch_size)
if (self.dataset_name == "mnist"):
n_class = np.zeros(10)
n_class[0] = 5923 - batch_size
n_class[1] = 6742
n_class[2] = 5958
n_class[3] = 6131
n_class[4] = 5842
n_class[5] = 5421
n_class[6] = 5918
n_class[7] = 6265
n_class[8] = 5851
n_class[9] = 5949
Z_sample = np.random.uniform(-1, 1, size=(batch_size, self.z_dim))
y = np.zeros(batch_size, dtype=np.int64) + 0
y_one_hot = np.zeros((batch_size, self.y_dim))
y_one_hot[np.arange(batch_size), y] = 1
images = self.sess.run(self.fake_images,
feed_dict={
self.z: Z_sample,
self.y: y_one_hot
})
for classLabel in range(0, 10):
for _ in range(0, int(n_class[classLabel]), batch_size):
Z_sample = np.random.uniform(-1,
1,
size=(batch_size, self.z_dim))
y = np.zeros(batch_size, dtype=np.int64) + classLabel
y_one_hot_init = np.zeros((batch_size, self.y_dim))
y_one_hot_init[np.arange(batch_size), y] = 1
images = np.append(images,
self.sess.run(self.fake_images,
feed_dict={
self.z: Z_sample,
self.y: y_one_hot_init
}),
axis=0)
y_one_hot = np.append(y_one_hot, y_one_hot_init, axis=0)
X_test, Y_test = load_mnist(train = False)
Y_test = [int(y) for y in Y_test]
classes = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Y_test = label_binarize(Y_test, classes=classes)
if (self.dataset_name == "cifar10"):
n_class = np.zeros(10)
for t in range(1, 10):
n_class[t] = 1000
Z_sample = np.random.uniform(-1, 1, size=(batch_size, self.z_dim))
y = np.zeros(batch_size, dtype=np.int64) + 0
y_one_hot = np.zeros((batch_size, self.y_dim))
y_one_hot[np.arange(batch_size), y] = 1
images = self.sess.run(self.fake_images,
feed_dict={
self.z: Z_sample,
self.y: y_one_hot
})
for classLabel in range(0, 10):
for _ in range(0, int(n_class[classLabel]), batch_size):
Z_sample = np.random.uniform(-1,
1,
size=(batch_size, self.z_dim))
y = np.zeros(batch_size, dtype=np.int64) + classLabel
y_one_hot_init = np.zeros((batch_size, self.y_dim))
y_one_hot_init[np.arange(batch_size), y] = 1
images = np.append(images,
self.sess.run(self.fake_images,
feed_dict={
self.z: Z_sample,
self.y: y_one_hot_init
}),
axis=0)
y_one_hot = np.append(y_one_hot, y_one_hot_init, axis=0)
X_test, Y_test = load_cifar10(train=False)
classes = range(0, 10)
Y_test = label_binarize(Y_test, classes=classes)
print(" Classifying - Logistic Regression...")
TwoDim_images = images.reshape(np.shape(images)[0], -1)
X_test = X_test.reshape(np.shape(X_test)[0], -1)
Y_score = classify(TwoDim_images,
y_one_hot,
X_test,
"lr",
random_state_value=30)
false_positive_rate, true_positive_rate, roc_auc = compute_fpr_tpr_roc(
Y_test, Y_score)
classification_results_fname = self.base_dir + "CGAN_AuROC.txt"
classification_results = open(classification_results_fname, "w")
classification_results.write(
"\nepsilon : {:.2f}, sigma: {:.2f}, clipping value: {:.2f}".format(
(self.epsilon), round(self.noise_multiplier, 2),
round(self.l2_norm_clip, 2)))
classification_results.write("\nAuROC - logistic Regression: " +
str(roc_auc["micro"]))
classification_results.write(
"\n--------------------------------------------------------------------\n"
)
print(" Classifying - Random Forest...")
Y_score = classify(TwoDim_images,
y_one_hot,
X_test,
"rf",
random_state_value=30)
print(" Computing ROC - Random Forest ...")
false_positive_rate, true_positive_rate, roc_auc = compute_fpr_tpr_roc(
Y_test, Y_score)
classification_results.write(
"\nepsilon : {:.2f}, sigma: {:.2f}, clipping value: {:.2f}".format(
(self.epsilon), round(self.noise_multiplier, 2),
round(self.l2_norm_clip, 2)))
classification_results.write("\nAuROC - random Forest: " +
str(roc_auc["micro"]))
classification_results.write(
"\n--------------------------------------------------------------------\n"
)
print(" Classifying - multilayer Perceptron ...")
Y_score = classify(TwoDim_images,
y_one_hot,
X_test,
"mlp",
random_state_value=30)
print(" Computing ROC - Multilayer Perceptron ...")
false_positive_rate, true_positive_rate, roc_auc = compute_fpr_tpr_roc(
Y_test, Y_score)
classification_results.write(
"\nepsilon : {:.2f}, sigma: {:.2f}, clipping value: {:.2f}".format(
(self.epsilon), round(self.noise_multiplier, 2),
round(self.l2_norm_clip, 2)))
classification_results.write("\nAuROC - multilayer Perceptron: " +
str(roc_auc["micro"]))
classification_results.write(
"\n--------------------------------------------------------------------\n"
)
# save model for final step
self.save(self.checkpoint_dir, counter)
def compute_epsilon(self, steps):
"""Computes epsilon value for given hyperparameters."""
if self.noise_multiplier == 0.0:
return float('inf')
orders = [1 + x / 10. for x in range(1, 100)] + list(range(12, 64))
sampling_probability = self.batch_size / 60000
rdp = compute_rdp(q=sampling_probability,
noise_multiplier=self.noise_multiplier,
steps=steps,
orders=orders)
# Delta is set to 1e-5 because MNIST has 60000 training points.
return get_privacy_spent(orders, rdp, target_delta=1e-5)[0]
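The accountant above delegates to TensorFlow Privacy's `compute_rdp`/`get_privacy_spent`, which also exploit the `q = batch_size/60000` subsampling amplification. As a rough, self-contained sketch of the underlying mechanics only: for the plain (unsubsampled) Gaussian mechanism, the Rényi-DP cost of order α is α/(2σ²) per step, it composes additively over steps, and converts to (ε, δ) by minimizing over orders. Without subsampling the resulting ε is enormous, which is exactly why the subsampled accountant matters — this sketch just illustrates that the privacy budget grows monotonically with training steps:

```python
import numpy as np

def gaussian_rdp_epsilon(sigma, steps, delta=1e-5):
    # RDP of the unsubsampled Gaussian mechanism: eps_alpha = alpha / (2 sigma^2)
    # per step; RDP composes additively, then converts to (eps, delta) by
    # minimizing eps_alpha * steps + log(1/delta) / (alpha - 1) over alpha.
    alphas = np.arange(2, 128, dtype=float)
    rdp = steps * alphas / (2.0 * sigma ** 2)
    eps = rdp + np.log(1.0 / delta) / (alphas - 1.0)
    return eps.min()

# privacy cost grows with the number of training steps
eps_100 = gaussian_rdp_epsilon(sigma=0.6, steps=100)
eps_1000 = gaussian_rdp_epsilon(sigma=0.6, steps=1000)
print(eps_100, eps_1000)
```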
# CIFAR 10
def visualize_results_CIFAR(self, epoch):
tot_num_samples = min(self.sample_num, self.batch_size) # 64, 100
image_frame_dim = int(np.floor(np.sqrt(tot_num_samples))) # 8
""" random condition, random noise """
y = np.random.choice(self.y_dim, self.batch_size)
y_one_hot = np.zeros((self.batch_size, self.y_dim))
y_one_hot[np.arange(self.batch_size), y] = 1
z_sample = np.random.uniform(-1, 1, size=(self.batch_size,
self.z_dim)) # 100, 100
samples = self.sess.run(self.fake_images,
feed_dict={
self.z: z_sample,
self.y: y_one_hot
})
save_matplot_img(
samples[:image_frame_dim * image_frame_dim, :, :, :],
[image_frame_dim, image_frame_dim], self.result_dir + '/' +
self.model_name + '_epoch%03d' % epoch + '_test_all_classes.png')
# MNIST
def visualize_results_MNIST(self, epoch):
tot_num_samples = min(self.sample_num, self.batch_size)
image_frame_dim = int(np.floor(np.sqrt(tot_num_samples)))
""" random condition, random noise """
y = np.random.choice(self.y_dim, self.batch_size)
y_one_hot = np.zeros((self.batch_size, self.y_dim))
y_one_hot[np.arange(self.batch_size), y] = 1
z_sample = np.random.uniform(-1, 1, size=(self.batch_size, self.z_dim))
samples = self.sess.run(self.fake_images,
feed_dict={
self.z: z_sample,
self.y: y_one_hot
})
save_images(
samples[:image_frame_dim * image_frame_dim, :, :, :],
[image_frame_dim, image_frame_dim],
check_folder(self.result_dir + '/' + self.model_dir) + '/' +
self.model_name + '_epoch%03d' % epoch + '_test_all_classes.png')
""" specified condition, random noise """
n_styles = 10 # must be less than or equal to self.batch_size
np.random.seed()
si = np.random.choice(self.batch_size, n_styles)
for l in range(self.y_dim):
y = np.zeros(self.batch_size, dtype=np.int64) + l
y_one_hot = np.zeros((self.batch_size, self.y_dim))
y_one_hot[np.arange(self.batch_size), y] = 1
samples = self.sess.run(self.fake_images,
feed_dict={
self.z: z_sample,
self.y: y_one_hot
})
save_images(
samples[:image_frame_dim * image_frame_dim, :, :, :],
[image_frame_dim, image_frame_dim],
check_folder(self.result_dir + '/' + self.model_dir) + '/' +
self.model_name + '_epoch%03d' % epoch +
'_test_class_%d.png' % l)
samples = samples[si, :, :, :]
if l == 0:
all_samples = samples
else:
all_samples = np.concatenate((all_samples, samples), axis=0)
""" save merged images to check style-consistency """
canvas = np.zeros_like(all_samples)
for s in range(n_styles):
for c in range(self.y_dim):
canvas[s * self.y_dim +
c, :, :, :] = all_samples[c * n_styles + s, :, :, :]
save_images(
canvas, [n_styles, self.y_dim],
check_folder(self.result_dir + '/' + self.model_dir) + '/' +
self.model_name + '_epoch%03d' % epoch +
'_test_all_classes_style_by_style.png')
@property
def model_dir(self):
return "{}_{}_{}_{}".format(self.model_name, self.dataset_name,
self.batch_size, self.z_dim)
def save(self, checkpoint_dir, step):
checkpoint_dir = os.path.join(checkpoint_dir, self.model_dir,
self.model_name)
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
self.saver.save(self.sess,
os.path.join(checkpoint_dir,
self.model_name + '.model'),
global_step=step)
def load(self, checkpoint_dir):
import re
print(" [*] Reading checkpoints...")
checkpoint_dir = os.path.join(checkpoint_dir, self.model_dir,
self.model_name)
ckpt = tf.train.get_checkpoint_state(checkpoint_dir)
if ckpt and ckpt.model_checkpoint_path:
ckpt_name = os.path.basename(ckpt.model_checkpoint_path)
self.saver.restore(self.sess,
os.path.join(checkpoint_dir, ckpt_name))
counter = int(
next(re.finditer(r"(\d+)(?!.*\d)", ckpt_name)).group(0))
print(" [*] Success to read {}".format(ckpt_name))
return True, counter
else:
print(" [*] Failed to find a checkpoint")
return False, 0
| Using TensorFlow backend.
| Apache-2.0 | Advanced_DP_CGAN/AdvancedDPCGAN.ipynb | reihaneh-torkzadehmahani/DP-CGAN |
gan.utils | """
Most codes from https://github.com/carpedm20/DCGAN-tensorflow
"""
from __future__ import division
import scipy.misc
import numpy as np
from six.moves import xrange
import matplotlib.pyplot as plt
import os, gzip
import tensorflow as tf
import tensorflow.contrib.slim as slim
from keras.datasets import cifar10
from keras.datasets import mnist
def one_hot(x, n):
"""
convert index representation to one-hot representation
"""
x = np.array(x)
assert x.ndim == 1
return np.eye(n)[x]
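For example, `one_hot` maps each integer label to the corresponding identity-matrix row (the helper is reproduced here so the snippet runs standalone):

```python
import numpy as np

def one_hot(x, n):
    # convert index representation to one-hot representation
    x = np.array(x)
    assert x.ndim == 1
    return np.eye(n)[x]

print(one_hot([0, 2, 1], 3))
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]
```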
def prepare_input(data=None, labels=None):
image_height = 32
image_width = 32
image_depth = 3
assert (data.shape[1] == image_height * image_width * image_depth)
assert (data.shape[0] == labels.shape[0])
# do mean normalization across all samples
mu = np.mean(data, axis=0)
mu = mu.reshape(1, -1)
sigma = np.std(data, axis=0)
sigma = sigma.reshape(1, -1)
data = data - mu
data = data / sigma
is_nan = np.isnan(data)
is_inf = np.isinf(data)
if np.any(is_nan) or np.any(is_inf):
print('data is not well-formed : is_nan {n}, is_inf: {i}'.format(
n=np.any(is_nan), i=np.any(is_inf)))
# data is transformed from (no_of_samples, 3072) to (no_of_samples , image_height, image_width, image_depth)
# make sure the dtype of the data is np.float32
data = data.reshape([-1, image_depth, image_height, image_width])
data = data.transpose([0, 2, 3, 1])
data = data.astype(np.float32)
return data, labels
def read_cifar10(filename): # queue one element
class CIFAR10Record(object):
pass
result = CIFAR10Record()
label_bytes = 1 # 2 for CIFAR-100
result.height = 32
result.width = 32
result.depth = 3
data = np.load(filename, encoding='latin1')
value = np.asarray(data['data']).astype(np.float32)
labels = np.asarray(data['labels']).astype(np.int32)
return prepare_input(value, labels)
def load_cifar10(train):
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
if (train == True):
dataX = x_train.reshape([-1, 32, 32, 3])
dataY = y_train
else:
dataX = x_test.reshape([-1, 32, 32, 3])
dataY = y_test
seed = 547
np.random.seed(seed)
np.random.shuffle(dataX)
np.random.seed(seed)
np.random.shuffle(dataY)
y_vec = np.zeros((len(dataY), 10), dtype=np.float)
for i, label in enumerate(dataY):
y_vec[i, dataY[i]] = 1.0
return dataX / 255., y_vec
def load_mnist(train = True):
def extract_data(filename, num_data, head_size, data_size):
with gzip.open(filename) as bytestream:
bytestream.read(head_size)
buf = bytestream.read(data_size * num_data)
data = np.frombuffer(buf, dtype=np.uint8).astype(np.float)
return data
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape((60000, 28, 28, 1))
y_train = y_train.reshape((60000))
x_test = x_test.reshape((10000, 28, 28, 1))
y_test = y_test.reshape((10000))
y_train = np.asarray(y_train)
y_test = np.asarray(y_test)
if (train == True):
seed = 547
np.random.seed(seed)
np.random.shuffle(x_train)
np.random.seed(seed)
np.random.shuffle(y_train)
y_vec = np.zeros((len(y_train), 10), dtype=np.float)
for i, label in enumerate(y_train):
y_vec[i, y_train[i]] = 1.0
return x_train / 255., y_vec
else:
seed = 547
np.random.seed(seed)
np.random.shuffle(x_test)
np.random.seed(seed)
np.random.shuffle(y_test)
y_vec = np.zeros((len(y_test), 10), dtype=np.float)
for i, label in enumerate(y_test):
y_vec[i, y_test[i]] = 1.0
return x_test / 255., y_vec
def check_folder(log_dir):
if not os.path.exists(log_dir):
os.makedirs(log_dir)
return log_dir
def show_all_variables():
model_vars = tf.trainable_variables()
slim.model_analyzer.analyze_vars(model_vars, print_info=True)
def get_image(image_path,
input_height,
input_width,
resize_height=64,
resize_width=64,
crop=True,
grayscale=False):
image = imread(image_path, grayscale)
return transform(image, input_height, input_width, resize_height,
resize_width, crop)
def save_images(images, size, image_path):
return imsave(inverse_transform(images), size, image_path)
def imread(path, grayscale=False):
if (grayscale):
return scipy.misc.imread(path, flatten=True).astype(np.float)
else:
return scipy.misc.imread(path).astype(np.float)
def merge_images(images, size):
return inverse_transform(images)
def merge(images, size):
h, w = images.shape[1], images.shape[2]
if (images.shape[3] in (3, 4)):
c = images.shape[3]
img = np.zeros((h * size[0], w * size[1], c))
for idx, image in enumerate(images):
i = idx % size[1]
j = idx // size[1]
img[j * h:j * h + h, i * w:i * w + w, :] = image
return img
elif images.shape[3] == 1:
img = np.zeros((h * size[0], w * size[1]))
for idx, image in enumerate(images):
i = idx % size[1]
j = idx // size[1]
img[j * h:j * h + h, i * w:i * w + w] = image[:, :, 0]
return img
else:
raise ValueError('in merge(images,size) images parameter '
'must have dimensions: HxW or HxWx3 or HxWx4')
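`merge` tiles a batch of samples into a single image grid (row-major), which is how the sample montages saved during training are assembled. A standalone demo on dummy grayscale data (the grayscale branch is reproduced here so it runs on its own):

```python
import numpy as np

def merge(images, size):
    # tile a (N, h, w, 1) batch into a (size[0]*h, size[1]*w) canvas, row-major
    h, w = images.shape[1], images.shape[2]
    img = np.zeros((h * size[0], w * size[1]))
    for idx, image in enumerate(images):
        i = idx % size[1]
        j = idx // size[1]
        img[j * h:j * h + h, i * w:i * w + w] = image[:, :, 0]
    return img

# four 28x28x1 "samples" tiled into a 2x2 canvas
batch = np.random.rand(4, 28, 28, 1)
canvas = merge(batch, (2, 2))
print(canvas.shape)  # (56, 56)
```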
def imsave(images, size, path):
image = np.squeeze(merge(images, size))
return scipy.misc.imsave(path, image)
def center_crop(x, crop_h, crop_w, resize_h=64, resize_w=64):
if crop_w is None:
crop_w = crop_h
h, w = x.shape[:2]
j = int(round((h - crop_h) / 2.))
i = int(round((w - crop_w) / 2.))
return scipy.misc.imresize(x[j:j + crop_h, i:i + crop_w],
[resize_h, resize_w])
def transform(image,
input_height,
input_width,
resize_height=64,
resize_width=64,
crop=True):
if crop:
cropped_image = center_crop(image, input_height, input_width,
resize_height, resize_width)
else:
cropped_image = scipy.misc.imresize(image,
[resize_height, resize_width])
return np.array(cropped_image) / 127.5 - 1.
def inverse_transform(images):
return (images + 1.) / 2.
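Since the CIFAR-10 generator ends in a tanh, its outputs live in [-1, 1]; `inverse_transform` maps them back to [0, 1] intensities before saving. A one-line check:

```python
import numpy as np

def inverse_transform(images):
    # map tanh outputs in [-1, 1] back to image intensities in [0, 1]
    return (images + 1.) / 2.

x = np.array([-1.0, 0.0, 1.0])
print(inverse_transform(x))  # [0.  0.5 1. ]
```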
""" Drawing Tools """
# borrowed from https://github.com/ykwon0407/variational_autoencoder/blob/master/variational_bayes.ipynb
def save_scattered_image(z,
id,
z_range_x,
z_range_y,
name='scattered_image.jpg'):
N = 10
plt.figure(figsize=(8, 6))
plt.scatter(z[:, 0],
z[:, 1],
c=np.argmax(id, 1),
marker='o',
edgecolor='none',
cmap=discrete_cmap(N, 'jet'))
plt.colorbar(ticks=range(N))
axes = plt.gca()
axes.set_xlim([-z_range_x, z_range_x])
axes.set_ylim([-z_range_y, z_range_y])
plt.grid(True)
plt.savefig(name)
# borrowed from https://gist.github.com/jakevdp/91077b0cae40f8f8244a
def discrete_cmap(N, base_cmap=None):
"""Create an N-bin discrete colormap from the specified input map"""
# Note that if base_cmap is a string or None, you can simply do
# return plt.cm.get_cmap(base_cmap, N)
# The following works for string, None, or a colormap instance:
base = plt.cm.get_cmap(base_cmap)
color_list = base(np.linspace(0, 1, N))
cmap_name = base.name + str(N)
return base.from_list(cmap_name, color_list, N)
def save_matplot_img(images, size, image_path):
# rescale image data // M*N*3 // RGB float32 : values must lie between 0. and 1.
for idx in range(64):
vMin = np.amin(images[idx])
vMax = np.amax(images[idx])
img_arr = images[idx].reshape(32 * 32 * 3, 1) # flatten
for i, v in enumerate(img_arr):
img_arr[i] = (v - vMin) / (vMax - vMin)
img_arr = img_arr.reshape(32, 32, 3) # M*N*3
plt.subplot(8, 8, idx + 1), plt.imshow(img_arr,
interpolation='nearest')
plt.axis("off")
plt.savefig(image_path)
| _____no_output_____ | Apache-2.0 | Advanced_DP_CGAN/AdvancedDPCGAN.ipynb | reihaneh-torkzadehmahani/DP-CGAN |
Main | import tensorflow as tf
import os
base_dir = "./"
out_dir = base_dir + "mnist_clip1_sigma0.6_lr0.55"
if not os.path.exists(out_dir):
os.mkdir(out_dir)
gpu_options = tf.GPUOptions(visible_device_list="0")
with tf.Session(config=tf.ConfigProto(allow_soft_placement=True,
gpu_options=gpu_options)) as sess:
epoch = 100
cgan = OUR_DP_CGAN(sess,
epoch=epoch,
batch_size=64,
z_dim=100,
epsilon=9.6,
delta=1e-5,
sigma=0.6,
clip_value=1,
lr=0.055,
dataset_name='mnist',
checkpoint_dir=out_dir + "/checkpoint/",
result_dir=out_dir + "/results/",
log_dir=out_dir + "/logs/",
base_dir=base_dir)
cgan.build_model()
print(" [*] Building model finished!")
show_all_variables()
cgan.train()
print(" [*] Training finished!")
| Downloading data from https://s3.amazonaws.com/img-datasets/mnist.npz
11493376/11490434 [==============================] - 1s 0us/step
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/array_grad.py:425: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
[*] Building model finished!
---------
Variables: name (type shape) [size]
---------
discriminator/d_conv1/w:0 (float32_ref 4x4x11x64) [11264, bytes: 45056]
discriminator/d_conv1/biases:0 (float32_ref 64) [64, bytes: 256]
discriminator/d_conv2/w:0 (float32_ref 4x4x64x128) [131072, bytes: 524288]
discriminator/d_conv2/biases:0 (float32_ref 128) [128, bytes: 512]
discriminator/d_bn2/beta:0 (float32_ref 128) [128, bytes: 512]
discriminator/d_bn2/gamma:0 (float32_ref 128) [128, bytes: 512]
discriminator/d_fc3/Matrix:0 (float32_ref 6272x1024) [6422528, bytes: 25690112]
discriminator/d_fc3/bias:0 (float32_ref 1024) [1024, bytes: 4096]
discriminator/d_bn3/beta:0 (float32_ref 1024) [1024, bytes: 4096]
discriminator/d_bn3/gamma:0 (float32_ref 1024) [1024, bytes: 4096]
discriminator/d_fc4/Matrix:0 (float32_ref 1024x1) [1024, bytes: 4096]
discriminator/d_fc4/bias:0 (float32_ref 1) [1, bytes: 4]
generator/g_fc1/Matrix:0 (float32_ref 110x1024) [112640, bytes: 450560]
generator/g_fc1/bias:0 (float32_ref 1024) [1024, bytes: 4096]
generator/g_bn1/beta:0 (float32_ref 1024) [1024, bytes: 4096]
generator/g_bn1/gamma:0 (float32_ref 1024) [1024, bytes: 4096]
generator/g_fc2/Matrix:0 (float32_ref 1024x6272) [6422528, bytes: 25690112]
generator/g_fc2/bias:0 (float32_ref 6272) [6272, bytes: 25088]
generator/g_bn2/beta:0 (float32_ref 6272) [6272, bytes: 25088]
generator/g_bn2/gamma:0 (float32_ref 6272) [6272, bytes: 25088]
generator/g_dc3/w:0 (float32_ref 4x4x64x128) [131072, bytes: 524288]
generator/g_dc3/biases:0 (float32_ref 64) [64, bytes: 256]
generator/g_bn3/beta:0 (float32_ref 64) [64, bytes: 256]
generator/g_bn3/gamma:0 (float32_ref 64) [64, bytes: 256]
generator/g_dc4/w:0 (float32_ref 4x4x1x64) [1024, bytes: 4096]
generator/g_dc4/biases:0 (float32_ref 1) [1, bytes: 4]
Total size of variables: 13258754
Total bytes of variables: 53035016
[*] Reading checkpoints...
[*] Failed to find a checkpoint
[!] Load failed...
Iteration : 98 Eps: 5.750369579644922
| Apache-2.0 | Advanced_DP_CGAN/AdvancedDPCGAN.ipynb | reihaneh-torkzadehmahani/DP-CGAN |
URL: http://matplotlib.org/examples/pylab_examples/quiver_demo.htmlMost examples work across multiple plotting backends, this example is also available for:* [Matplotlib - quiver_demo](../matplotlib/quiver_demo.ipynb) | import numpy as np
import holoviews as hv
from holoviews import opts
hv.extension('bokeh') | _____no_output_____ | BSD-3-Clause | examples/gallery/demos/bokeh/quiver_demo.ipynb | ppwadhwa/holoviews |
Define data | xs, ys = np.arange(0, 2 * np.pi, .2), np.arange(0, 2 * np.pi, .2)
X, Y = np.meshgrid(xs, ys)
U = np.cos(X)
V = np.sin(Y)
# Convert to magnitude and angle
mag = np.sqrt(U**2 + V**2)
angle = (np.pi/2.) - np.arctan2(U/mag, V/mag) | _____no_output_____ | BSD-3-Clause | examples/gallery/demos/bokeh/quiver_demo.ipynb | ppwadhwa/holoviews |
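The conversion above can be sanity-checked by inverting it: with `angle = π/2 − arctan2(U/mag, V/mag)`, we have `cos(angle) = U/mag` and `sin(angle) = V/mag`, so the original components are recovered as `mag·cos(angle)` and `mag·sin(angle)` (a standalone numpy check on a few made-up vectors):

```python
import numpy as np

U = np.array([1.0, 0.0, -0.5])
V = np.array([0.0, 2.0, 0.5])
mag = np.sqrt(U**2 + V**2)
angle = (np.pi / 2.) - np.arctan2(U / mag, V / mag)

# invert the (mag, angle) conversion back to Cartesian components
U_back = mag * np.cos(angle)
V_back = mag * np.sin(angle)
print(np.allclose(U, U_back), np.allclose(V, V_back))  # True True
```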
Plot | label = 'Arrows scale with plot width, not view'
opts.defaults(opts.VectorField(height=400, width=500))
vectorfield = hv.VectorField((xs, ys, angle, mag))
vectorfield.relabel(label)
label = "pivot='mid'; every third arrow"
vf_mid = hv.VectorField((xs[::3], ys[::3], angle[::3, ::3], mag[::3, ::3], ))
points = hv.Points((X[::3, ::3].flat, Y[::3, ::3].flat))
opts.defaults(opts.Points(color='red'))
(vf_mid * points).relabel(label)
label = "pivot='tip'; scales with x view"
vectorfield = hv.VectorField((xs, ys, angle, mag))
points = hv.Points((X.flat, Y.flat))
(points * vectorfield).opts(
opts.VectorField(magnitude='Magnitude', color='Magnitude',
pivot='tip', line_width=2, title=label)) | _____no_output_____ | BSD-3-Clause | examples/gallery/demos/bokeh/quiver_demo.ipynb | ppwadhwa/holoviews |
**PINN eikonal solver for a portion of the Marmousi model** | from google.colab import drive
drive.mount('/content/gdrive')
cd "/content/gdrive/My Drive/Colab Notebooks/Codes/PINN_isotropic_eikonal_R1"
!pip install sciann==0.5.4.0
!pip install tensorflow==2.2.0
#!pip install keras==2.3.1
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import tensorflow as tf
from sciann import Functional, Variable, SciModel, PDE
from sciann.utils import *
import scipy.io
import time
import random
from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes
from mpl_toolkits.axes_grid1.inset_locator import mark_inset
tf.config.threading.set_intra_op_parallelism_threads(1)
tf.config.threading.set_inter_op_parallelism_threads(1)
np.random.seed(123)
tf.random.set_seed(123)
# Loading velocity model
filename="./inputs/marm/model/marm_vz.txt"
marm = pd.read_csv(filename, index_col=None, header=None)
velmodel = np.reshape(np.array(marm), (101, 101)).T
# Loading reference solution
filename="./inputs/marm/traveltimes/fmm_or2_marm_s(1,1).txt"
T_data = pd.read_csv(filename, index_col=None, header=None)
T_data = np.reshape(np.array(T_data), (101, 101)).T
#Model specifications
zmin = 0.; zmax = 2.; deltaz = 0.02;
xmin = 0.; xmax = 2.; deltax = 0.02;
# Point-source location
sz = 1.0; sx = 1.0;
# Number of training points
num_tr_pts = 3000
# Create the grid, calculate reference traveltimes, and prepare the list of grid points for training (X_star)
z = np.arange(zmin,zmax+deltaz,deltaz)
nz = z.size
x = np.arange(xmin,xmax+deltax,deltax)
nx = x.size
Z,X = np.meshgrid(z,x,indexing='ij')
X_star = [Z.reshape(-1,1), X.reshape(-1,1)]
selected_pts = np.random.choice(np.arange(Z.size),num_tr_pts,replace=False)
Zf = Z.reshape(-1,1)[selected_pts]
Zf = np.append(Zf,sz)
Xf = X.reshape(-1,1)[selected_pts]
Xf = np.append(Xf,sx)
X_starf = [Zf.reshape(-1,1), Xf.reshape(-1,1)]
# Plot the velocity model with the source location
plt.style.use('default')
plt.figure(figsize=(4,4))
ax = plt.gca()
im = ax.imshow(velmodel, extent=[xmin,xmax,zmax,zmin], aspect=1, cmap="jet")
ax.plot(sx,sz,'k*',markersize=8)
plt.xlabel('Offset (km)', fontsize=14)
plt.xticks(fontsize=10)
plt.ylabel('Depth (km)', fontsize=14)
plt.yticks(fontsize=10)
ax.xaxis.set_major_locator(plt.MultipleLocator(0.5))
ax.yaxis.set_major_locator(plt.MultipleLocator(0.5))
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="6%", pad=0.15)
cbar = plt.colorbar(im, cax=cax)
cbar.set_label('km/s',size=10)
cbar.ax.tick_params(labelsize=10)
plt.savefig("./figs/marm/velmodel.pdf", format='pdf', bbox_inches="tight")
# Analytical solution for the known traveltime part
vel = velmodel[int(round(sz/deltaz)),int(round(sx/deltax))] # Velocity at the source location
T0 = np.sqrt((Z-sz)**2 + (X-sx)**2)/vel;
px0 = np.divide(X-sx, T0*vel**2, out=np.zeros_like(T0), where=T0!=0)
pz0 = np.divide(Z-sz, T0*vel**2, out=np.zeros_like(T0), where=T0!=0)
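Since T0 is the exact traveltime for a homogeneous medium with the source-point velocity, its analytic gradient components (px0, pz0) must satisfy the eikonal equation |∇T0|² = 1/vel² everywhere away from the source — the property the factored PINN loss relies on. A standalone numpy check of this identity (the velocity value here is just an assumed constant for illustration):

```python
import numpy as np

vel = 2.0                      # assumed source-point velocity (km/s)
sz, sx = 1.0, 1.0
z = np.linspace(0, 2, 101)
x = np.linspace(0, 2, 101)
Z, X = np.meshgrid(z, x, indexing='ij')

# background traveltime and its analytic gradient components
T0 = np.sqrt((Z - sz)**2 + (X - sx)**2) / vel
px0 = np.divide(X - sx, T0 * vel**2, out=np.zeros_like(T0), where=T0 != 0)
pz0 = np.divide(Z - sz, T0 * vel**2, out=np.zeros_like(T0), where=T0 != 0)

grad_sq = px0**2 + pz0**2
mask = T0 > 0                  # exclude the source point itself
print(np.allclose(grad_sq[mask], 1.0 / vel**2))  # True
```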
# Find source location id in X_star
TOLX = 1e-6
TOLZ = 1e-6
sids,_ = np.where(np.logical_and(np.abs(X_starf[0]-sz)<TOLZ , np.abs(X_starf[1]-sx)<TOLX))
print(sids)
print(sids.shape)
print(X_starf[0][sids,0])
print(X_starf[1][sids,0])
# Preparing the Sciann model object
K.clear_session()
layers = [20]*10
# Appending source values
velmodelf = velmodel.reshape(-1,1)[selected_pts]; velmodelf = np.append(velmodelf,vel)
px0f = px0.reshape(-1,1)[selected_pts]; px0f = np.append(px0f,0.)
pz0f = pz0.reshape(-1,1)[selected_pts]; pz0f = np.append(pz0f,0.)
T0f = T0.reshape(-1,1)[selected_pts]; T0f = np.append(T0f,0.)
xt = Variable("xt",dtype='float64')
zt = Variable("zt",dtype='float64')
vt = Variable("vt",dtype='float64')
px0t = Variable("px0t",dtype='float64')
pz0t = Variable("pz0t",dtype='float64')
T0t = Variable("T0t",dtype='float64')
tau = Functional("tau", [zt, xt], layers, 'l-atan')
# Loss function based on the factored isotropic eikonal equation
L = (T0t*diff(tau, xt) + tau*px0t)**2 + (T0t*diff(tau, zt) + tau*pz0t)**2 - 1.0/vt**2
targets = [tau, 20*L, (1-sign(tau*T0t))*abs(tau*T0t)]
target_vals = [(sids, np.ones(sids.shape).reshape(-1,1)), 'zeros', 'zeros']
model = SciModel(
[zt, xt, vt, pz0t, px0t, T0t],
targets,
load_weights_from='models/vofz_model-end.hdf5',
optimizer='scipy-l-BFGS-B'
)
#Model training
start_time = time.time()
hist = model.train(
X_starf + [velmodelf,pz0f,px0f,T0f],
target_vals,
batch_size = X_starf[0].size,
epochs = 12000,
learning_rate = 0.008,
verbose=0
)
elapsed = time.time() - start_time
print('Training time: %.2f seconds' %(elapsed))
# Convergence history plot for verification
fig = plt.figure(figsize=(5,3))
ax = plt.axes()
#ax.semilogy(np.arange(0,300,0.001),hist.history['loss'],LineWidth=2)
ax.semilogy(hist.history['loss'], linewidth=2)
ax.set_xlabel('Epochs (x $10^3$)',fontsize=16)
plt.xticks(fontsize=12)
#ax.xaxis.set_major_locator(plt.MultipleLocator(50))
ax.set_ylabel('Loss',fontsize=16)
plt.yticks(fontsize=12);
plt.grid()
# Predicting traveltime solution from the trained model
L_pred = L.eval(model, X_star + [velmodel,pz0,px0,T0])
tau_pred = tau.eval(model, X_star + [velmodel,pz0,px0,T0])
tau_pred = tau_pred.reshape(Z.shape)
T_pred = tau_pred*T0
print('Time at source: %.4f'%(tau_pred[int(round(sz/deltaz)),int(round(sx/deltax))]))
# Plot the PINN solution error
plt.style.use('default')
plt.figure(figsize=(4,4))
ax = plt.gca()
im = ax.imshow(np.abs(T_pred-T_data), extent=[xmin,xmax,zmax,zmin], aspect=1, cmap="jet")
plt.xlabel('Offset (km)', fontsize=14)
plt.xticks(fontsize=10)
plt.ylabel('Depth (km)', fontsize=14)
plt.yticks(fontsize=10)
ax.xaxis.set_major_locator(plt.MultipleLocator(0.5))
ax.yaxis.set_major_locator(plt.MultipleLocator(0.5))
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="6%", pad=0.15)
cbar = plt.colorbar(im, cax=cax)
cbar.set_label('seconds',size=10)
cbar.ax.tick_params(labelsize=10)
plt.savefig("./figs/marm/pinnerror.pdf", format='pdf', bbox_inches="tight")
# Load fast sweeping traveltimes for comparison
T_fsm = np.load('./inputs/marm/traveltimes/Tcomp.npy')
# Plot the first-order fast sweeping solution error
plt.style.use('default')
plt.figure(figsize=(4,4))
ax = plt.gca()
im = ax.imshow(np.abs(T_fsm-T_data), extent=[xmin,xmax,zmax,zmin], aspect=1, cmap="jet")
plt.xlabel('Offset (km)', fontsize=14)
plt.xticks(fontsize=10)
plt.ylabel('Depth (km)', fontsize=14)
plt.yticks(fontsize=10)
ax.xaxis.set_major_locator(plt.MultipleLocator(0.5))
ax.yaxis.set_major_locator(plt.MultipleLocator(0.5))
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="6%", pad=0.15)
cbar = plt.colorbar(im, cax=cax)
cbar.set_label('seconds',size=10)
cbar.ax.tick_params(labelsize=10)
plt.savefig("./figs/marm/fmm1error.pdf", format='pdf', bbox_inches="tight")
# Traveltime contour plots
fig = plt.figure(figsize=(5,5))
ax = plt.gca()
im1 = ax.contour(T_data, 6, extent=[xmin,xmax,zmin,zmax], colors='r')
im2 = ax.contour(T_pred, 6, extent=[xmin,xmax,zmin,zmax], colors='k',linestyles = 'dashed')
im3 = ax.contour(T_fsm, 6, extent=[xmin,xmax,zmin,zmax], colors='b',linestyles = 'dotted')
ax.plot(sx,sz,'k*',markersize=8)
plt.xlabel('Offset (km)', fontsize=14)
plt.ylabel('Depth (km)', fontsize=14)
ax.tick_params(axis='both', which='major', labelsize=8)
plt.gca().invert_yaxis()
h1,_ = im1.legend_elements()
h2,_ = im2.legend_elements()
h3,_ = im3.legend_elements()
ax.legend([h1[0], h2[0], h3[0]], ['Reference', 'PINN', 'Fast sweeping'],fontsize=12)
ax.xaxis.set_major_locator(plt.MultipleLocator(0.5))
ax.yaxis.set_major_locator(plt.MultipleLocator(0.5))
plt.xticks(fontsize=10)
plt.yticks(fontsize=10)
#ax.arrow(1.9, 1.7, -0.1, -0.1, head_width=0.05, head_length=0.075, fc='red', ec='red',width=0.02)
plt.savefig("./figs/marm/contours.pdf", format='pdf', bbox_inches="tight")
print(np.linalg.norm(T_pred-T_data)/np.linalg.norm(T_data))
print(np.linalg.norm(T_pred-T_data)) | 0.005852164264049101
0.19346281698941364
| MIT | codes/script5.ipynb | umairbinwaheed/PINN_eikonal |
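The loss `L` above is the residual of the factored eikonal equation for T = tau * T0. As a standalone sanity check (a NumPy sketch independent of SciANN; the grid and velocity values are illustrative), in a homogeneous medium the exact factored solution tau = 1 drives that residual to zero:

```python
import numpy as np

# Homogeneous medium: the analytic traveltime T0 = r / v solves the eikonal
# equation |grad T|^2 = 1/v^2 exactly, so the factored residual with
# tau = 1 (i.e., T = tau * T0) must vanish.
v = 2.0                                  # constant velocity, km/s
sx, sz = 0.0, 0.0                        # source at the origin
x = np.linspace(0.1, 4.0, 50)
z = np.linspace(0.1, 4.0, 50)
Z, X = np.meshgrid(z, x, indexing="ij")

r = np.sqrt((X - sx) ** 2 + (Z - sz) ** 2)
T0 = r / v
# Analytic derivatives of T0 (the same px0, pz0 as in the cell above)
px0 = (X - sx) / (r * v)
pz0 = (Z - sz) / (r * v)

tau = np.ones_like(T0)
dtau_dx = np.zeros_like(T0)              # tau is constant
dtau_dz = np.zeros_like(T0)

# Factored eikonal residual, mirroring the loss L above
residual = (T0 * dtau_dx + tau * px0) ** 2 \
         + (T0 * dtau_dz + tau * pz0) ** 2 - 1.0 / v ** 2
print(float(np.abs(residual).max()))     # ~0 up to round-off
```

The same identity is what the network-predicted tau is trained to satisfy at the sampled points of the heterogeneous model.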
GRIP June'21 - The Sparks Foundation Data Science and Business Analytics Author: Smriti Gupta Task 1: **Prediction using Supervised ML** * Predict the percentage of a student based on the no. of study hours. * What will be the predicted score if a student studies for 9.25 hrs/day? * _LANGUAGE:_ Python * _DATASET:_ http://bit.ly/w-data | # Importing Libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
%config Completer.use_jedi = False
# Reading data from remote link
url = "https://raw.githubusercontent.com/AdiPersonalWorks/Random/master/student_scores%20-%20student_scores.csv"
df = pd.read_csv(url)
# Viewing the Data
df.head(10)
# Shape of the Dataset
df.shape
# Checking the information of Data
df.info()
# Checking the statistical details of Data
df.describe()
# Checking the correlation between Hours and Scores
corr = df.corr()
corr
colors = ['#670067','#008080'] | _____no_output_____ | MIT | Task 1- Prediction using Supervised ML/Linear Regression.ipynb | smritig19/GRIP_Internship_Tasks |
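`df.corr()` above reports the Pearson correlation between Hours and Scores. The same coefficient can be computed directly; the (x, y) pairs below are toy values, not the actual dataset:

```python
import numpy as np

# Pearson correlation by hand, the quantity df.corr() tabulates.
# These values are hypothetical stand-ins for the hours/scores columns.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.1, 5.9, 8.0])

r = np.corrcoef(x, y)[0, 1]   # off-diagonal entry of the 2x2 matrix
print(round(r, 4))            # close to 1: strong positive linear relation
```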
Data Visualization | # 2-D graph to establish relationship between the Data and checking for linearity
sns.set_style('darkgrid')
df.plot(x='Hours', y='Scores', style='o')
plt.title('Hours vs Percentage')
plt.xlabel('Hours Studied')
plt.ylabel('Percentage Score')
plt.show() | _____no_output_____ | MIT | Task 1- Prediction using Supervised ML/Linear Regression.ipynb | smritig19/GRIP_Internship_Tasks |
Data Preprocessing | X = df.iloc[:, :-1].values
y = df.iloc[:, 1].values | _____no_output_____ | MIT | Task 1- Prediction using Supervised ML/Linear Regression.ipynb | smritig19/GRIP_Internship_Tasks |
LINEAR REGRESSION MODEL Splitting Dataset into training and test sets: | from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2) | _____no_output_____ | MIT | Task 1- Prediction using Supervised ML/Linear Regression.ipynb | smritig19/GRIP_Internship_Tasks |
Training the Model | from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train, y_train)
print('TRAINING COMPLETED.') | TRAINING COMPLETED.
| MIT | Task 1- Prediction using Supervised ML/Linear Regression.ipynb | smritig19/GRIP_Internship_Tasks |
Predicting the Score | y_predict = regressor.predict(X_test)
prediction = pd.DataFrame({'Hours': [i[0] for i in X_test], 'Predicted Scores': [k for k in y_predict]})
prediction
print(regressor.intercept_)
print(regressor.coef_)
# Plotting the regression line
line = regressor.coef_*X+regressor.intercept_
# Plotting for the test data
plt.scatter(X, y, color = colors[1])
plt.plot(X, line, color = colors[0]);
plt.title('Hours vs Percentage')
plt.xlabel('Hours Studied')
plt.ylabel('Percentage Score')
plt.show() | _____no_output_____ | MIT | Task 1- Prediction using Supervised ML/Linear Regression.ipynb | smritig19/GRIP_Internship_Tasks |
Checking the Accuracy Scores for training and test set | print('Test Score')
print(regressor.score(X_test, y_test))
print('Training Score')
print(regressor.score(X_train, y_train)) | Test Score
0.9480612939203932
Training Score
0.953103139564599
| MIT | Task 1- Prediction using Supervised ML/Linear Regression.ipynb | smritig19/GRIP_Internship_Tasks |
Comparing Actual Scores and Predicted Scores | data= pd.DataFrame({'Actual': y_test,'Predicted': y_predict})
data
# Visualization comparing Actual Scores and Predicted Scores
plt.scatter(X_test, y_test, color = colors[1])
plt.plot(X_test, y_predict, color = colors[0])
plt.title("Hours Studied Vs Percentage (Test Dataset)")
plt.xlabel("Hour")
plt.ylabel("Percentage")
plt.show() | _____no_output_____ | MIT | Task 1- Prediction using Supervised ML/Linear Regression.ipynb | smritig19/GRIP_Internship_Tasks |
Model Evaluation Metrics | # Checking the efficiency of the model
from sklearn import metrics
print('Mean Absolute Error:', metrics.mean_absolute_error(y_test, y_predict))
print('Mean Squared Error:', metrics.mean_squared_error(y_test, y_predict))
print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_test, y_predict))) | Mean Absolute Error: 4.451593449522768
Mean Squared Error: 25.188194900366135
Root Mean Squared Error: 5.018784205399365
| MIT | Task 1- Prediction using Supervised ML/Linear Regression.ipynb | smritig19/GRIP_Internship_Tasks |
What will be the predicted score if a student studies for 9.25 hrs/day? | hours = 9.25
ans = regressor.predict([[hours]])
print("No of Hours = {}".format(hours))
print("Predicted Score = {}".format(ans[0])) | No of Hours = 9.25
Predicted Score = 93.71476919815156
| MIT | Task 1- Prediction using Supervised ML/Linear Regression.ipynb | smritig19/GRIP_Internship_Tasks |
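`LinearRegression` fits ordinary least squares, y = m*x + c, and `predict` simply evaluates that line. A minimal sketch with hypothetical hours/scores pairs (not the real dataset):

```python
import numpy as np

# Ordinary least squares, the computation behind regressor.fit/predict.
# The (hours, scores) values below are illustrative only.
hours = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
scores = np.array([10.0, 21.0, 29.0, 41.0, 50.0])

m, c = np.polyfit(hours, scores, deg=1)   # slope (coef_) and intercept_
pred = m * 9.25 + c                       # same formula as regressor.predict
print(round(m, 3), round(c, 3), round(pred, 2))
```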
import pandas as pd
import time
import os.path
import warnings
warnings.filterwarnings('ignore')
# install the DenMune clustering algorithm with pip from the official Python package index, PyPI
# from https://pypi.org/project/denmune/
!pip install denmune
# now import it
from denmune import DenMune
# clone the datasets from our 'datasets' repository
if not os.path.exists('datasets'):
!git clone https://github.com/egy1st/datasets
data_path = 'datasets/denmune/chameleon/'
datasets = ["t7.10k", "t4.8k", "t5.8k", "t8.8k"]
for dataset in datasets:
data_file = data_path + dataset + '.csv'
X_train = pd.read_csv(data_file, sep=',', header=None)
dm = DenMune(train_data=X_train, k_nearest=39, rgn_tsne=False)
labels, validity = dm.fit_predict(show_noise=True, show_analyzer=True)
dataset = "clusterable"
data_file = data_path + dataset + '.csv'
X_train = pd.read_csv(data_file, sep=',', header=None)
dm = DenMune(train_data=X_train, k_nearest=24, rgn_tsne=False)
labels, validity = dm.fit_predict(show_noise=True, show_analyzer=True)
| Plotting train data
| BSD-3-Clause | kaggle/detecting-non-groundtruth-datasets.ipynb | fossabot/denmune-clustering-algorithm | |
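DenMune builds its cluster seeds from k-nearest-neighbor relations. As a toy illustration (not the library's actual algorithm), here is how mutual k-NN pairs can be found with plain NumPy:

```python
import numpy as np

# Two points are mutual k-NN if each appears in the other's k-nearest list.
# This is only a sketch of the relation DenMune exploits, not DenMune itself.
def mutual_knn_pairs(points, k):
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)               # a point is not its own neighbor
    knn = np.argsort(d, axis=1)[:, :k]        # indices of each point's k-NN
    pairs = set()
    for i in range(len(points)):
        for j in knn[i]:
            if i in knn[j] and i < j:         # mutual relation, deduplicated
                pairs.add((i, int(j)))
    return sorted(pairs)

pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
print(mutual_knn_pairs(pts, k=1))   # → [(0, 1), (2, 3)]
```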
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
!git clone https://github.com/yohanesnuwara/ccs-gundih | Cloning into 'ccs-gundih'...
remote: Enumerating objects: 94, done.
remote: Counting objects: 100% (94/94), done.
remote: Compressing objects: 100% (94/94), done.
remote: Total 259 (delta 55), reused 0 (delta 0), pack-reused 165
Receiving objects: 100% (259/259), 14.37 MiB | 26.09 MiB/s, done.
Resolving deltas: 100% (136/136), done.
| MIT | main/00_gundih_historical_production_data.ipynb | yohanesnuwara/bsc-thesis-carbon-capture-storage | |
Visualize the Historical Production Data | # Read simulation result
col = np.array(['Date', 'Days', 'WSAT', 'OSAT', 'GSAT', 'GMT', 'OMR', 'GMR', 'GCDI', 'GCDM', 'WCD',
'WGR', 'WCT', 'VPR', 'VPT', 'VIR', 'VIT', 'WPR', 'OPR', 'GPR', 'WPT', 'OPT', 'GPT',
'PR', 'GIR', 'GIT', 'GOR'])
case1 = pd.read_excel(r'/content/ccs-gundih/data/CASE_1.xlsx'); case1 = pd.DataFrame(case1, columns=col) #INJ-2, 15 MMSCFD, kv/kh = 0.1
case2 = pd.read_excel(r'/content/ccs-gundih/data/CASE_2.xlsx'); case2 = pd.DataFrame(case2, columns=col) #INJ-2, 15 MMSCFD, kv/kh = 0.5
case1.head(5)
case2.head(5)
# convert to pandas datetime
date = pd.to_datetime(case1['Date'])
# gas production cumulative
GPT1 = case1['GPT'] * 35.31 * 1E-06 # convert from m3 to ft3 then to mmscf
GPT2 = case2['GPT'] * 35.31 * 1E-06
# gas production rate
GPR1 = case1['GPR'] * 35.31 * 1E-06
GPR2 = case2['GPR'] * 35.31 * 1E-06
# average pressure
PR1 = case1['PR'] * 14.5038 # convert from bar to psi
PR2 = case2['PR'] * 14.5038
# plot gas production data from 2014-01-01 to 2019-01-01
pd.plotting.register_matplotlib_converters()
plt.figure(figsize=(12, 7))
plt.plot(date, GPR1, color='blue')
plt.plot(date, GPR2, color='red')
plt.xlabel("Date"); plt.ylabel("Gas Production Rate (MMscfd)")
plt.title("Gas Production Rate from 01/01/2014 to 01/01/2019", pad=20, size=15)
plt.xlim('2014-01-01', '2019-01-01')
# plt.ylim(0, 50000)
# plot gas production rate, cumulative production, and pressure from 2014-05-01 to 2019-01-01
pd.plotting.register_matplotlib_converters()
fig = plt.figure()
fig = plt.figure(figsize=(12,7))
host = fig.add_subplot(111)
par1 = host.twinx()
par2 = host.twinx()
host.set_xlabel("Year")
host.set_ylabel("Gas Production Rate (MMscfd)")
par1.set_ylabel("Cumulative Gas Production (MMscf)")
par2.set_ylabel("Average Reservoir Pressure (psi)")
host.set_title("Historical Production Data of Gundih Field from 2014 to 2019", pad=20, size=15)
color1 = plt.cm.viridis(0)
color2 = plt.cm.viridis(.5)
color3 = plt.cm.viridis(.8)
p1, = host.plot(date, GPR1, color=color1,label="Gas production rate (MMscfd)")
p2, = par1.plot(date, GPT1, color=color2, label="Cumulative gas production (MMscf)")
p3, = par2.plot(date, PR1, color=color3, label="Average Pressure (psi)")
host.set_xlim('2014-05-01', '2019-01-01')
host.set_ylim(ymin=0)
par1.set_ylim(0, 40000)
par2.set_ylim(3400, 4100)
lns = [p1, p2, p3]
host.legend(handles=lns, loc='best')
# right, left, top, bottom
par2.spines['right'].set_position(('outward', 60))
plt.savefig('/content/ccs-gundih/result/production_curve') | _____no_output_____ | MIT | main/00_gundih_historical_production_data.ipynb | yohanesnuwara/bsc-thesis-carbon-capture-storage |
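The unit conversions used above (m³ to MMscf via 35.31 ft³/m³ and a 1e-6 scale, bar to psi via 14.5038) written out as small helpers:

```python
# Standard conversion factors: 1 m^3 ≈ 35.31 ft^3 and 1 bar ≈ 14.5038 psi.
M3_TO_FT3 = 35.31
BAR_TO_PSI = 14.5038

def m3_to_mmscf(vol_m3):
    # ft^3 (≈ scf at standard conditions) scaled to millions, as in GPT1 above
    return vol_m3 * M3_TO_FT3 * 1e-6

def bar_to_psi(p_bar):
    # same factor applied to the simulator pressures PR1/PR2 above
    return p_bar * BAR_TO_PSI

print(m3_to_mmscf(1e6))    # 1,000,000 m^3 is ~35.31 MMscf
print(bar_to_psi(250.0))   # 250 bar in psi
```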
Dry-Gas Reservoir Analysis | !git clone https://github.com/yohanesnuwara/reservoir-engineering
# calculate gas z factor and FVF
import os, sys
sys.path.append('/content/reservoir-engineering/Unit 2 Review of Rock and Fluid Properties/functions')
from pseudoprops import pseudoprops
from dranchuk_aboukassem import dranchuk
from gasfvf import gasfvf
temp_f = (temp * 9/5) + 32 # convert deg C to Fahrenheit
pressure = np.array(PR1)
z_arr = []
Bg_arr = []
for i in range(len(pressure)):
P_pr, T_pr = pseudoprops(temp_f, pressure[i], 0.8, 0.00467, 0.23)
rho_pr, z = dranchuk(T_pr, P_pr)
temp_r = temp_f + 459.67
Bg = 0.0282793 * z * temp_r / pressure[i] # Eq 2.2, temp in Rankine, p in psia, result in res ft3/scf
z_arr.append(float(z))
Bg_arr.append(float(Bg))
Bg_arr = np.array(Bg_arr)
F = GPT1 * Bg_arr # MMscf
Eg = Bg_arr - Bg_arr[0]
F_Eg = F / Eg
plt.figure(figsize=(10,7))
plt.plot(GPT1, (F_Eg / 1E+03), '.', color='red') # convert F_Eg from MMscf to Bscf
plt.xlim(xmin=0); plt.ylim(ymin=0)
plt.title("Waterdrive Diagnostic Plot of $F/E_g$ vs. $G_p$", pad=20, size=15)
plt.xlabel('Cumulative Gas Production (MMscf)')
plt.ylabel('$F/E_g$ (Bscf)')
plt.ylim(ymin=250)
date_hist = date.iloc[:1700]
GPT1_hist = GPT1.iloc[:1700]
Bg_arr = np.array(Bg_arr)
Bg_arr_hist = Bg_arr[:1700]
F_hist = GPT1_hist * Bg_arr_hist # MMscf
Eg_hist = Bg_arr_hist - Bg_arr_hist[0]
F_Eg_hist = F_hist / Eg_hist
plt.figure(figsize=(10,7))
plt.plot(GPT1_hist, (F_Eg_hist / 1E+03), '.')
plt.title("Waterdrive Diagnostic Plot of $F/E_g$ vs. $G_p$", pad=20, size=15)
plt.xlabel('Cumulative Gas Production (MMscf)')
plt.ylabel('$F/E_g$ (Bscf)')
plt.xlim(xmin=0); plt.ylim(300, 350)
p_z = PR1 / z_arr
plt.plot(GPT1, p_z, '.') | _____no_output_____ | MIT | main/00_gundih_historical_production_data.ipynb | yohanesnuwara/bsc-thesis-carbon-capture-storage |
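The material-balance quantities F, Eg, and F/Eg above, reduced to a tiny worked example. The pressures, z-factors, and cumulative production below are illustrative numbers, not field data:

```python
# Gas formation volume factor in res ft^3/scf, same form as the Bg line above
# (temperature in Rankine, pressure in psia).
def gas_fvf(z, temp_rankine, p_psia):
    return 0.0282793 * z * temp_rankine / p_psia

Bgi = gas_fvf(z=0.90, temp_rankine=660.0, p_psia=4000.0)   # initial conditions
Bg  = gas_fvf(z=0.88, temp_rankine=660.0, p_psia=3500.0)   # after depletion

Gp = 20000.0          # hypothetical cumulative production, MMscf
F  = Gp * Bg          # underground withdrawal
Eg = Bg - Bgi         # gas expansion term
print(F / Eg)         # a flat F/Eg trend vs Gp indicates depletion drive
```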
1A.1 - Guess a random number (correction) We start from the function introduced in the exercise statement, which lets the user enter a number. | import random
nombre = input("Entrez un nombre")
nombre | _____no_output_____ | MIT | _doc/notebooks/td1a/pp_exo_deviner_un_nombre_correction.ipynb | Jerome-maker/ensae_teaching_cs |
**Q1:** Write a game in which Python randomly picks a number between 0 and 100, and try to find that number in 10 steps. | n = random.randint(0,100)
appreciation = "?"
while True:
var = input("Entrez un nombre")
var = int(var)
if var < n :
appreciation = "trop bas"
print(var, appreciation)
else :
appreciation = "trop haut"
print(var, appreciation)
if var == n:
appreciation = "bravo !"
print(var, appreciation)
break | Entrez un nombre7
7 trop bas
Entrez un nombre10
10 trop bas
Entrez un nombre100
100 trop haut
Entrez un nombre50
50 trop bas
Entrez un nombre75
75 trop haut
Entrez un nombre60
60 trop haut
Entrez un nombre55
55 trop haut
Entrez un nombre53
53 trop haut
Entrez un nombre52
52 trop haut
Entrez un nombre51
51 trop haut
51 bravo !
| MIT | _doc/notebooks/td1a/pp_exo_deviner_un_nombre_correction.ipynb | Jerome-maker/ensae_teaching_cs |
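The manual game above is bisection in disguise: halving the remaining interval finds any number in 0..100 in at most 7 guesses, so 10 attempts always suffice. A sketch with the guesses automated:

```python
# Bisection solver for the guessing game: each guess halves the interval.
def solve(n, lo=0, hi=100):
    guesses = 0
    while True:
        guess = (lo + hi) // 2
        guesses += 1
        if guess == n:
            return guesses
        if guess < n:
            lo = guess + 1      # "trop bas": the answer is higher
        else:
            hi = guess - 1      # "trop haut": the answer is lower

worst = max(solve(n) for n in range(101))
print(worst)  # → 7
```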
**Q2:** Turn this game into a function ``jeu(nVies)``, where ``nVies`` is the maximum number of iterations. | import random
n = random.randint(0,100)
vies = 10
appreciation = "?"
while vies > 0:
var = input("Entrez un nombre")
var = int(var)
if var < n :
appreciation = "trop bas"
print(vies, var, appreciation)
else :
appreciation = "trop haut"
print(vies, var, appreciation)
if var == n:
appreciation = "bravo !"
print(vies, var, appreciation)
break
vies -= 1 | Entrez un nombre50
10 50 trop bas
Entrez un nombre75
9 75 trop haut
Entrez un nombre60
8 60 trop bas
Entrez un nombre65
7 65 trop bas
Entrez un nombre70
6 70 trop bas
Entrez un nombre73
5 73 trop haut
Entrez un nombre72
4 72 trop haut
Entrez un nombre71
3 71 trop haut
3 71 bravo !
| MIT | _doc/notebooks/td1a/pp_exo_deviner_un_nombre_correction.ipynb | Jerome-maker/ensae_teaching_cs |
**Q3:** Adapt the code into a player class (``joueur``) with a play method (``jouer``), where a player is defined by a nickname and a number of lives. Have two players play and determine the winner. | class joueur:
def __init__(self, vies, pseudo):
self.vies = vies
self.pseudo = pseudo
def jouer(self):
appreciation = "?"
n = random.randint(0,100)
while self.vies > 0:
message = appreciation + " -- " + self.pseudo + " : " + str(self.vies) + " vies restantes. Nombre choisi : "
var = input(message)
var = int(var)
if var < n :
appreciation = "trop bas"
print(self.vies, var, appreciation)
else :
appreciation = "trop haut"
print(self.vies, var, appreciation)
if var == n:
appreciation = "bravo !"
print(self.vies, var, appreciation)
break
self.vies -= 1
# Initialize the two players
j1 = joueur(10, "joueur 1")
j2 = joueur(10, "joueur 2")
# j1 and j2 play
j1.jouer()
j2.jouer()
# Lives remaining for each player
print("Nombre de vies restantes à chaque joueur")
print(j1.pseudo + " : " + str(j1.vies) + " restantes")
print(j2.pseudo + " : " + str(j2.vies) + " restantes")
# Outcome of the game
print("Résultat de la partie")
if j1.vies > j2.vies:
print(j1.pseudo + " a gagné la partie")
elif j1.vies == j2.vies:
print("match nul")
else: print(j2.pseudo + " a gagné la partie") | ? -- joueur 1 : 10 vies restantes. Nombre choisi : 50
3 50 trop haut
trop haut -- joueur 1 : 9 vies restantes. Nombre choisi : 25
3 25 trop haut
trop haut -- joueur 1 : 8 vies restantes. Nombre choisi : 10
3 10 trop haut
trop haut -- joueur 1 : 7 vies restantes. Nombre choisi : 5
3 5 trop bas
trop bas -- joueur 1 : 6 vies restantes. Nombre choisi : 7
3 7 trop haut
3 7 bravo !
? -- joueur 2 : 10 vies restantes. Nombre choisi : 50
3 50 trop haut
trop haut -- joueur 2 : 9 vies restantes. Nombre choisi : 25
3 25 trop bas
trop bas -- joueur 2 : 8 vies restantes. Nombre choisi : 30
3 30 trop haut
3 30 bravo !
Nombre de vies restantes à chaque joueur
joueur 1 : 6 restantes
joueur 2 : 8 restantes
Résultat de la partie
joueur 1a gagné la partie
| MIT | _doc/notebooks/td1a/pp_exo_deviner_un_nombre_correction.ipynb | Jerome-maker/ensae_teaching_cs |
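A self-contained English variant of the `joueur` class, sketched with two changes: guesses come from bisection instead of `input()` so it runs unattended, and all lives bookkeeping goes through the instance attribute:

```python
import random

# A hypothetical rework of the joueur/jouer class above; names are translated
# and the interactive input is replaced by a deterministic bisection strategy.
class Player:
    def __init__(self, lives, nickname):
        self.lives = lives
        self.nickname = nickname

    def play(self, target, lo=0, hi=100):
        while self.lives > 0:
            guess = (lo + hi) // 2
            if guess == target:
                return True               # found before running out of lives
            if guess < target:
                lo = guess + 1
            else:
                hi = guess - 1
            self.lives -= 1               # one life per wrong guess
        return False

random.seed(0)
p1, p2 = Player(10, "player 1"), Player(10, "player 2")
w1 = p1.play(random.randint(0, 100))
w2 = p2.play(random.randint(0, 100))
print(w1, w2, p1.lives, p2.lives)         # more lives left means fewer guesses
```

With bisection, at most 6 wrong guesses are possible before the 7th guess succeeds, so 10 lives always suffice and the player with more lives remaining wins.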
Run sims to understand sensitivity of 'contact_tracing_constant' | ctc_range = [0.1 * x for x in range(11)]
dfs_ctc_pre_reopen = run_sensitivity_sims(pre_reopen_params, param_to_vary='contact_tracing_constant',
param_values = ctc_range, trajectories_per_config=250, time_horizon=100)
dfs_ctc_post_reopen = run_sensitivity_sims(reopen_params, param_to_vary='contact_tracing_constant',
param_values = ctc_range, trajectories_per_config=250, time_horizon=100)
import matplotlib.pyplot as plt
def plot_many_dfs_threshold(dfs_dict, threshold=0.1, xlabel="", title="", figsize=(10,6)):
plt.figure(figsize=figsize)
for df_label, dfs_varied in dfs_dict.items():
p_thresholds = []
xs = sorted(list(dfs_varied.keys()))
for x in xs:
cips = extract_cips(dfs_varied[x])
cip_exceed_thresh = [cip for cip in cips if cip >= threshold]
p_thresholds.append(len(cip_exceed_thresh) / len(cips) * 100)
plt.plot([x * 100 for x in xs], p_thresholds, marker='o', label=df_label)
plt.xlabel(xlabel)
plt.ylabel("Probability at least {:.0f}% infected (%)".format(threshold * 100))
plt.title(title)
plt.legend(loc='best')
plt.show()
title = """Outbreak Likelihood vs. Contact Tracing Effectiveness"""
plot_many_dfs_threshold({'Post-Reopen (Population-size 2500, Contacts/person/day 10)': dfs_ctc_post_reopen,
'Pre-Reopen (Population-size 1500, Contacts/person/day 7)': dfs_ctc_pre_reopen,
},
xlabel="Percentage of contacts recalled in contact tracing (%)",
title=title) | _____no_output_____ | MIT | boqn/group_testing/notebooks/.ipynb_checkpoints/cornell_reopen_analysis_sensitivity_contact_tracing_effectiveness-checkpoint.ipynb | RaulAstudillo06/BOQN |
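For each parameter value, `plot_many_dfs_threshold` plots the percentage of trajectories whose cumulative infected proportion (CIP) meets the threshold. The core computation, with hypothetical CIP values standing in for `extract_cips` output:

```python
# Fraction of trajectories at or above the outbreak threshold, as a percent.
def pct_exceeding(cips, threshold=0.1):
    exceed = [cip for cip in cips if cip >= threshold]
    return len(exceed) / len(cips) * 100

sample_cips = [0.02, 0.05, 0.10, 0.30, 0.45]   # hypothetical CIPs
print(pct_exceeding(sample_cips))               # → 60.0
```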
Artificial Intelligence Nanodegree Machine Translation Project In this notebook, sections that end with **'(IMPLEMENTATION)'** in the header indicate that the following blocks of code will require additional functionality which you must provide. Please be sure to read the instructions carefully! Introduction In this notebook, you will build a deep neural network that functions as part of an end-to-end machine translation pipeline. Your completed pipeline will accept English text as input and return the French translation. - **Preprocess** - You'll convert text to sequences of integers. - **Models** Create models which accept a sequence of integers as input and return a probability distribution over possible translations. After learning about the basic types of neural networks that are often used for machine translation, you will engage in your own investigations to design your own model! - **Prediction** Run the model on English text. | %load_ext autoreload
%aimport helper, tests
%autoreload 1
import collections
import helper
import numpy as np
import project_tests as tests
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Model
from keras.layers import GRU, Input, Dense, TimeDistributed, Activation, RepeatVector, Bidirectional
from keras.layers.embeddings import Embedding
from keras.optimizers import Adam
from keras.losses import sparse_categorical_crossentropy | Using TensorFlow backend.
| MIT | machine_translation.ipynb | jay-thakur/machine_translation |
Verify access to the GPU The following test applies only if you expect to be using a GPU, e.g., while running in a Udacity Workspace or using an AWS instance with GPU support. Run the next cell, and verify that the device_type is "GPU". - If the device is not GPU & you are running from a Udacity Workspace, then save your workspace with the icon at the top, then click "enable" at the bottom of the workspace. - If the device is not GPU & you are running from an AWS instance, then refer to the cloud computing instructions in the classroom to verify your setup steps. | from tensorflow.python.client import device_lib
print(device_lib.list_local_devices()) | [name: "/cpu:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 15440441665772238793
, name: "/gpu:0"
device_type: "GPU"
memory_limit: 357433344
locality {
bus_id: 1
}
incarnation: 3061658136091710780
physical_device_desc: "device: 0, name: Tesla K80, pci bus id: 0000:00:04.0"
]
| MIT | machine_translation.ipynb | jay-thakur/machine_translation |
Dataset We begin by investigating the dataset that will be used to train and evaluate your pipeline. The most common datasets used for machine translation are from [WMT](http://www.statmt.org/). However, that will take a long time to train a neural network on. We'll be using a dataset we created for this project that contains a small vocabulary. You'll be able to train your model in a reasonable time with this dataset. Load Data The data is located in `data/small_vocab_en` and `data/small_vocab_fr`. The `small_vocab_en` file contains English sentences with their French translations in the `small_vocab_fr` file. Load the English and French data from these files by running the cell below. | # Load English data
english_sentences = helper.load_data('data/small_vocab_en')
# Load French data
french_sentences = helper.load_data('data/small_vocab_fr')
print('Dataset Loaded') | Dataset Loaded
| MIT | machine_translation.ipynb | jay-thakur/machine_translation |
Files Each line in `small_vocab_en` contains an English sentence with the respective translation in each line of `small_vocab_fr`. View the first two lines from each file. | for sample_i in range(2):
print('small_vocab_en Line {}: {}'.format(sample_i + 1, english_sentences[sample_i]))
print('small_vocab_fr Line {}: {}'.format(sample_i + 1, french_sentences[sample_i])) | small_vocab_en Line 1: new jersey is sometimes quiet during autumn , and it is snowy in april .
small_vocab_fr Line 1: new jersey est parfois calme pendant l' automne , et il est neigeux en avril .
small_vocab_en Line 2: the united states is usually chilly during july , and it is usually freezing in november .
small_vocab_fr Line 2: les états-unis est généralement froid en juillet , et il gèle habituellement en novembre .
| MIT | machine_translation.ipynb | jay-thakur/machine_translation |
From looking at the sentences, you can see they have been preprocessed already. The punctuation has been delimited using spaces. All the text has been converted to lowercase. This should save you some time, but the text requires more preprocessing. Vocabulary The complexity of the problem is determined by the complexity of the vocabulary. A more complex vocabulary is a more complex problem. Let's look at the complexity of the dataset we'll be working with. | english_words_counter = collections.Counter([word for sentence in english_sentences for word in sentence.split()])
french_words_counter = collections.Counter([word for sentence in french_sentences for word in sentence.split()])
print('{} English words.'.format(len([word for sentence in english_sentences for word in sentence.split()])))
print('{} unique English words.'.format(len(english_words_counter)))
print('10 Most common words in the English dataset:')
print('"' + '" "'.join(list(zip(*english_words_counter.most_common(10)))[0]) + '"')
print()
print('{} French words.'.format(len([word for sentence in french_sentences for word in sentence.split()])))
print('{} unique French words.'.format(len(french_words_counter)))
print('10 Most common words in the French dataset:')
print('"' + '" "'.join(list(zip(*french_words_counter.most_common(10)))[0]) + '"') | 1823250 English words.
227 unique English words.
10 Most common words in the English dataset:
"is" "," "." "in" "it" "during" "the" "but" "and" "sometimes"
1961295 French words.
355 unique French words.
10 Most common words in the French dataset:
"est" "." "," "en" "il" "les" "mais" "et" "la" "parfois"
| MIT | machine_translation.ipynb | jay-thakur/machine_translation |
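The vocabulary statistics above come from `collections.Counter` over whitespace tokens; a tiny reproduction on toy sentences:

```python
import collections

# Count words across sentences exactly as the cell above does for the corpus.
sentences = ["the cat sat", "the cat ran", "a dog ran"]
counter = collections.Counter(
    word for sentence in sentences for word in sentence.split())

print(len(counter))            # number of unique words → 6
print(counter.most_common(2))  # the most frequent words with their counts
```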
For comparison, _Alice's Adventures in Wonderland_ contains 2,766 unique words of a total of 15,500 words. Preprocess For this project, you won't use text data as input to your model. Instead, you'll convert the text into sequences of integers using the following preprocess methods: 1. Tokenize the words into ids 2. Add padding to make all the sequences the same length. Time to start preprocessing the data... Tokenize (IMPLEMENTATION) For a neural network to predict on text data, it first has to be turned into data it can understand. Text data like "dog" is a sequence of ASCII character encodings. Since a neural network is a series of multiplication and addition operations, the input data needs to be number(s). We can turn each character into a number or each word into a number. These are called character and word ids, respectively. Character ids are used for character level models that generate text predictions for each character. A word level model uses word ids that generate text predictions for each word. Word level models tend to learn better, since they are lower in complexity, so we'll use those. Turn each sentence into a sequence of word ids using Keras's [`Tokenizer`](https://keras.io/preprocessing/text/tokenizer) function. Use this function to tokenize `english_sentences` and `french_sentences` in the cell below. Running the cell will run `tokenize` on sample data and show output for debugging. | def tokenize(x):
"""
Tokenize x
:param x: List of sentences/strings to be tokenized
:return: Tuple of (tokenized x data, tokenizer used to tokenize x)
"""
# TODO: Implement
tokenizer = Tokenizer()
tokenizer.fit_on_texts(x)
return tokenizer.texts_to_sequences(x), tokenizer
tests.test_tokenize(tokenize)
# Tokenize Example output
text_sentences = [
'The quick brown fox jumps over the lazy dog .',
'By Jove , my quick study of lexicography won a prize .',
'This is a short sentence .']
text_tokenized, text_tokenizer = tokenize(text_sentences)
print(text_tokenizer.word_index)
print()
for sample_i, (sent, token_sent) in enumerate(zip(text_sentences, text_tokenized)):
print('Sequence {} in x'.format(sample_i + 1))
print(' Input: {}'.format(sent))
print(' Output: {}'.format(token_sent)) | {'the': 1, 'quick': 2, 'a': 3, 'brown': 4, 'fox': 5, 'jumps': 6, 'over': 7, 'lazy': 8, 'dog': 9, 'by': 10, 'jove': 11, 'my': 12, 'study': 13, 'of': 14, 'lexicography': 15, 'won': 16, 'prize': 17, 'this': 18, 'is': 19, 'short': 20, 'sentence': 21}
Sequence 1 in x
Input: The quick brown fox jumps over the lazy dog .
Output: [1, 2, 4, 5, 6, 7, 1, 8, 9]
Sequence 2 in x
Input: By Jove , my quick study of lexicography won a prize .
Output: [10, 11, 12, 2, 13, 14, 15, 16, 3, 17]
Sequence 3 in x
Input: This is a short sentence .
Output: [18, 19, 3, 20, 21]
| MIT | machine_translation.ipynb | jay-thakur/machine_translation |
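Under the hood, `Tokenizer` builds its word index by frequency: the most common word gets id 1 (id 0 stays unused, which is why padding can claim it). A hypothetical pure-Python sketch of that behavior, for intuition only (the real Keras class also strips punctuation and offers many more options):

```python
from collections import Counter

def tiny_tokenize(sentences):
    """Assign word ids by descending frequency, starting at 1 (0 stays free for padding)."""
    words = [w for s in sentences for w in s.lower().split()]
    # most_common() orders by count, so frequent words get the smallest ids
    word_index = {w: i + 1 for i, (w, _) in enumerate(Counter(words).most_common())}
    sequences = [[word_index[w] for w in s.lower().split()] for s in sentences]
    return sequences, word_index

seqs, index = tiny_tokenize(['the quick fox', 'the lazy dog'])
print(index['the'])  # 'the' appears twice, so it gets id 1
print(seqs)
```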
Padding (IMPLEMENTATION)When batching the sequence of word ids together, each sequence needs to be the same length. Since sentences are dynamic in length, we can add padding to the end of the sequences to make them the same length.Make sure all the English sequences have the same length and all the French sequences have the same length by adding padding to the **end** of each sequence using Keras's [`pad_sequences`](https://keras.io/preprocessing/sequence/pad_sequences) function. | def pad(x, length=None):
"""
Pad x
:param x: List of sequences.
:param length: Length to pad the sequence to. If None, use length of longest sequence in x.
:return: Padded numpy array of sequences
"""
# TODO: Implement
if length is None:
length = max([len(sentence) for sentence in x])
return pad_sequences(x, maxlen=length, padding='post')
tests.test_pad(pad)
# Pad Tokenized output
test_pad = pad(text_tokenized)
for sample_i, (token_sent, pad_sent) in enumerate(zip(text_tokenized, test_pad)):
print('Sequence {} in x'.format(sample_i + 1))
print(' Input: {}'.format(np.array(token_sent)))
print(' Output: {}'.format(pad_sent)) | Sequence 1 in x
Input: [1 2 4 5 6 7 1 8 9]
Output: [1 2 4 5 6 7 1 8 9 0]
Sequence 2 in x
Input: [10 11 12 2 13 14 15 16 3 17]
Output: [10 11 12 2 13 14 15 16 3 17]
Sequence 3 in x
Input: [18 19 3 20 21]
Output: [18 19 3 20 21 0 0 0 0 0]
| MIT | machine_translation.ipynb | jay-thakur/machine_translation |
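Conceptually, `pad_sequences` with `padding='post'` just right-fills each row with zeros; a minimal plain-Python equivalent (illustrative only, since the Keras function also supports truncation, `pre` padding, and returns a NumPy array):

```python
def tiny_pad(sequences, length=None):
    """Right-pad each sequence with 0s to a common length, like padding='post'."""
    if length is None:
        length = max(len(s) for s in sequences)
    return [s + [0] * (length - len(s)) for s in sequences]

padded = tiny_pad([[1, 2, 4], [10, 11], [18]])
print(padded)  # every row is now length 3
```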
Preprocess PipelineYour focus for this project is to build neural network architecture, so we won't ask you to create a preprocess pipeline. Instead, we've provided you with the implementation of the `preprocess` function. | def preprocess(x, y):
"""
Preprocess x and y
:param x: Feature List of sentences
:param y: Label List of sentences
:return: Tuple of (Preprocessed x, Preprocessed y, x tokenizer, y tokenizer)
"""
preprocess_x, x_tk = tokenize(x)
preprocess_y, y_tk = tokenize(y)
preprocess_x = pad(preprocess_x)
preprocess_y = pad(preprocess_y)
# Keras's sparse_categorical_crossentropy function requires the labels to be in 3 dimensions
preprocess_y = preprocess_y.reshape(*preprocess_y.shape, 1)
return preprocess_x, preprocess_y, x_tk, y_tk
preproc_english_sentences, preproc_french_sentences, english_tokenizer, french_tokenizer =\
preprocess(english_sentences, french_sentences)
max_english_sequence_length = preproc_english_sentences.shape[1]
max_french_sequence_length = preproc_french_sentences.shape[1]
english_vocab_size = len(english_tokenizer.word_index)
french_vocab_size = len(french_tokenizer.word_index)
print('Data Preprocessed')
print("Max English sentence length:", max_english_sequence_length)
print("Max French sentence length:", max_french_sequence_length)
print("English vocabulary size:", english_vocab_size)
print("French vocabulary size:", french_vocab_size) | Data Preprocessed
Max English sentence length: 15
Max French sentence length: 21
English vocabulary size: 199
French vocabulary size: 344
| MIT | machine_translation.ipynb | jay-thakur/machine_translation |
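The comment in `preprocess` about `sparse_categorical_crossentropy` needing 3-dimensional labels just means each scalar label id gets its own length-1 trailing axis, turning shape `(batch, time)` into `(batch, time, 1)`. In plain nested-list terms:

```python
def add_last_axis(batch):
    """Turn (batch, time) label ids into (batch, time, 1), mirroring reshape(*shape, 1)."""
    return [[[label] for label in sequence] for sequence in batch]

labels = [[1, 2, 0], [4, 0, 0]]
print(add_last_axis(labels))
```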
ModelsIn this section, you will experiment with various neural network architectures.You will begin by training four relatively simple architectures.- Model 1 is a simple RNN- Model 2 is an RNN with Embedding- Model 3 is a Bidirectional RNN- Model 4 is an optional Encoder-Decoder RNNAfter experimenting with the four simple architectures, you will construct a deeper architecture that is designed to outperform all four models. Ids Back to TextThe neural network will be translating the input to word ids, which isn't the final form we want. We want the French translation. The function `logits_to_text` will bridge the gap between the logits from the neural network and the French translation. You'll be using this function to better understand the output of the neural network. | def logits_to_text(logits, tokenizer):
"""
Turn logits from a neural network into text using the tokenizer
:param logits: Logits from a neural network
:param tokenizer: Keras Tokenizer fit on the labels
:return: String that represents the text of the logits
"""
index_to_words = {id: word for word, id in tokenizer.word_index.items()}
index_to_words[0] = '<PAD>'
return ' '.join([index_to_words[prediction] for prediction in np.argmax(logits, 1)])
print('`logits_to_text` function loaded.') | `logits_to_text` function loaded.
| MIT | machine_translation.ipynb | jay-thakur/machine_translation |
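Stripped of NumPy, the decoding done by `logits_to_text` is a per-timestep argmax over vocabulary scores followed by an id-to-word lookup. A sketch with invented scores (not real model output):

```python
def decode(logit_rows, index_to_words):
    # for each timestep, pick the word id with the highest score
    ids = [max(range(len(row)), key=row.__getitem__) for row in logit_rows]
    return ' '.join(index_to_words[i] for i in ids)

vocab = {0: '<PAD>', 1: 'est', 2: 'calme'}
fake_logits = [[0.1, 0.7, 0.2],    # argmax -> 1 ('est')
               [0.2, 0.1, 0.7],    # argmax -> 2 ('calme')
               [0.9, 0.05, 0.05]]  # argmax -> 0 ('<PAD>')
print(decode(fake_logits, vocab))  # est calme <PAD>
```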
Model 1: RNN (IMPLEMENTATION)A basic RNN model is a good baseline for sequence data. In this model, you'll build an RNN that translates English to French. | def simple_model(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
"""
Build and train a basic RNN on x and y
:param input_shape: Tuple of input shape
:param output_sequence_length: Length of output sequence
:param english_vocab_size: Number of unique English words in the dataset
:param french_vocab_size: Number of unique French words in the dataset
:return: Keras model built, but not trained
"""
# TODO: Build the layers
inputs = Input(input_shape[1:])
outputs = GRU(256, return_sequences=True)(inputs)
outputs = TimeDistributed(Dense(french_vocab_size, activation="softmax"))(outputs)
model = Model(inputs, outputs)
learning_rate = 0.001
model.compile(loss=sparse_categorical_crossentropy,
optimizer=Adam(learning_rate),
metrics=['accuracy'])
return model
tests.test_simple_model(simple_model)
# Reshaping the input to work with a basic RNN
tmp_x = pad(preproc_english_sentences, max_french_sequence_length)
tmp_x = tmp_x.reshape((-1, preproc_french_sentences.shape[-2], 1))
# Train the neural network
simple_rnn_model = simple_model(
tmp_x.shape,
max_french_sequence_length,
english_vocab_size,
french_vocab_size)
simple_rnn_model.fit(tmp_x, preproc_french_sentences, batch_size=1024, epochs=10, validation_split=0.2)
# Print prediction(s)
print(logits_to_text(simple_rnn_model.predict(tmp_x[:1])[0], french_tokenizer)) | Train on 110288 samples, validate on 27573 samples
Epoch 1/10
110288/110288 [==============================] - 12s 109us/step - loss: 2.5907 - acc: 0.4812 - val_loss: nan - val_acc: 0.5582
Epoch 2/10
110288/110288 [==============================] - 10s 91us/step - loss: 1.6567 - acc: 0.5862 - val_loss: nan - val_acc: 0.6038
Epoch 3/10
110288/110288 [==============================] - 10s 91us/step - loss: 1.4347 - acc: 0.6124 - val_loss: nan - val_acc: 0.6220
Epoch 4/10
110288/110288 [==============================] - 10s 91us/step - loss: 1.3146 - acc: 0.6324 - val_loss: nan - val_acc: 0.6414
Epoch 5/10
110288/110288 [==============================] - 10s 91us/step - loss: 1.2245 - acc: 0.6483 - val_loss: nan - val_acc: 0.6516
Epoch 6/10
110288/110288 [==============================] - 10s 92us/step - loss: 1.1588 - acc: 0.6597 - val_loss: nan - val_acc: 0.6692
Epoch 7/10
110288/110288 [==============================] - 10s 92us/step - loss: 1.1090 - acc: 0.6685 - val_loss: nan - val_acc: 0.6745
Epoch 8/10
110288/110288 [==============================] - 10s 91us/step - loss: 1.0710 - acc: 0.6729 - val_loss: nan - val_acc: 0.6732
Epoch 9/10
110288/110288 [==============================] - 10s 91us/step - loss: 1.0381 - acc: 0.6766 - val_loss: nan - val_acc: 0.6822
Epoch 10/10
110288/110288 [==============================] - 10s 91us/step - loss: 1.0106 - acc: 0.6810 - val_loss: nan - val_acc: 0.6816
new jersey est parfois calme en mois et il est il est en en <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD>
| MIT | machine_translation.ipynb | jay-thakur/machine_translation |
Model 2: Embedding (IMPLEMENTATION)You've turned the words into ids, but there's a better representation of a word, called a word embedding. An embedding is a vector representation of the word that is close to similar words in n-dimensional space, where n represents the size of the embedding vectors.In this model, you'll create an RNN model using embedding. | def embed_model(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
"""
Build and train a RNN model using word embedding on x and y
:param input_shape: Tuple of input shape
:param output_sequence_length: Length of output sequence
:param english_vocab_size: Number of unique English words in the dataset
:param french_vocab_size: Number of unique French words in the dataset
:return: Keras model built, but not trained
"""
# TODO: Implement
inputs = Input(input_shape[1:])
outputs = Embedding(english_vocab_size, output_sequence_length)(inputs)
outputs = GRU(256, return_sequences=True)(outputs)
outputs = TimeDistributed(Dense(french_vocab_size, activation="softmax"))(outputs)
model = Model(inputs, outputs)
learning_rate = 0.001
model.compile(loss=sparse_categorical_crossentropy,
optimizer=Adam(learning_rate),
metrics=['accuracy'])
return model
tests.test_embed_model(embed_model)
# TODO: Reshape the input
tmp_x = pad(preproc_english_sentences, preproc_french_sentences.shape[1])
tmp_x = tmp_x.reshape((-1, preproc_french_sentences.shape[-2]))
# TODO: Train the neural network
embed_rnn_model = embed_model(
tmp_x.shape,
preproc_french_sentences.shape[1],
english_vocab_size,
french_vocab_size)
embed_rnn_model.fit(tmp_x, preproc_french_sentences, batch_size=1024, epochs=10, validation_split=0.2)
# TODO: Print prediction(s)
print(logits_to_text(embed_rnn_model.predict(tmp_x[:1])[0], french_tokenizer)) | Train on 110288 samples, validate on 27573 samples
Epoch 1/10
110288/110288 [==============================] - 12s 104us/step - loss: 3.3620 - acc: 0.4088 - val_loss: nan - val_acc: 0.4603
Epoch 2/10
110288/110288 [==============================] - 11s 102us/step - loss: 2.5504 - acc: 0.4842 - val_loss: nan - val_acc: 0.5128
Epoch 3/10
110288/110288 [==============================] - 11s 102us/step - loss: 2.0617 - acc: 0.5411 - val_loss: nan - val_acc: 0.5635
Epoch 4/10
110288/110288 [==============================] - 11s 101us/step - loss: 1.6021 - acc: 0.6006 - val_loss: nan - val_acc: 0.6337
Epoch 5/10
110288/110288 [==============================] - 11s 102us/step - loss: 1.3343 - acc: 0.6570 - val_loss: nan - val_acc: 0.6817
Epoch 6/10
110288/110288 [==============================] - 11s 102us/step - loss: 1.1285 - acc: 0.7028 - val_loss: nan - val_acc: 0.7282
Epoch 7/10
110288/110288 [==============================] - 11s 102us/step - loss: 0.9586 - acc: 0.7497 - val_loss: nan - val_acc: 0.7712
Epoch 8/10
110288/110288 [==============================] - 11s 101us/step - loss: 0.8094 - acc: 0.7879 - val_loss: nan - val_acc: 0.8058
Epoch 9/10
110288/110288 [==============================] - 11s 101us/step - loss: 0.6796 - acc: 0.8175 - val_loss: nan - val_acc: 0.8296
Epoch 10/10
110288/110288 [==============================] - 11s 101us/step - loss: 0.5832 - acc: 0.8390 - val_loss: nan - val_acc: 0.8477
new jersey est parfois calme en l' et il est il est en <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD>
| MIT | machine_translation.ipynb | jay-thakur/machine_translation |
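An embedding layer is essentially a trainable lookup table: each word id selects one row of a weight matrix. A toy illustration with hand-written 2-dimensional vectors (the numbers here are arbitrary, not learned):

```python
# rows indexed by word id; row 0 corresponds to the padding id
embedding_table = [
    [0.0, 0.0],   # id 0: <PAD>
    [0.9, -0.1],  # id 1
    [0.8, 0.2],   # id 2
]

def embed(sequence):
    """Replace each word id in a sequence with its embedding vector."""
    return [embedding_table[word_id] for word_id in sequence]

print(embed([1, 2, 0]))
```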
Model 3: Bidirectional RNNs (IMPLEMENTATION)One restriction of an RNN is that it can't see future input, only the past. This is where bidirectional recurrent neural networks come in: they process the sequence in both directions, so each timestep has access to both past and future context. | def bd_model(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
"""
Build and train a bidirectional RNN model on x and y
:param input_shape: Tuple of input shape
:param output_sequence_length: Length of output sequence
:param english_vocab_size: Number of unique English words in the dataset
:param french_vocab_size: Number of unique French words in the dataset
:return: Keras model built, but not trained
"""
# TODO: Implement
inputs = Input(input_shape[1:])
outputs = Bidirectional(GRU(256, return_sequences=True))(inputs)
outputs = TimeDistributed(Dense(french_vocab_size, activation="softmax"))(outputs)
model = Model(inputs, outputs)
learning_rate = 0.001
model.compile(loss=sparse_categorical_crossentropy,
optimizer=Adam(learning_rate),
metrics=['accuracy'])
return model
tests.test_bd_model(bd_model)
# TODO: Train and Print prediction(s)
tmp_x = pad(preproc_english_sentences, preproc_french_sentences.shape[1])
# reshape to 3-D (samples, timesteps, features) since bd_model has no embedding layer
tmp_x = tmp_x.reshape((-1, preproc_french_sentences.shape[-2], 1))
bd_rnn_model = bd_model(
tmp_x.shape,
preproc_french_sentences.shape[1],
english_vocab_size,
french_vocab_size)
bd_rnn_model.fit(tmp_x, preproc_french_sentences, batch_size=1024, epochs=10, validation_split=0.2)
print(logits_to_text(bd_rnn_model.predict(tmp_x[:1])[0], french_tokenizer)) | Train on 110288 samples, validate on 27573 samples
Epoch 1/10
110288/110288 [==============================] - 12s 106us/step - loss: 3.3402 - acc: 0.4192 - val_loss: nan - val_acc: 0.4769
Epoch 2/10
110288/110288 [==============================] - 11s 101us/step - loss: 2.4795 - acc: 0.4908 - val_loss: nan - val_acc: 0.5142
Epoch 3/10
110288/110288 [==============================] - 11s 101us/step - loss: 1.9030 - acc: 0.5528 - val_loss: nan - val_acc: 0.5912
Epoch 4/10
110288/110288 [==============================] - 11s 101us/step - loss: 1.4347 - acc: 0.6258 - val_loss: nan - val_acc: 0.6628
Epoch 5/10
110288/110288 [==============================] - 11s 102us/step - loss: 1.1867 - acc: 0.6922 - val_loss: nan - val_acc: 0.7193
Epoch 6/10
110288/110288 [==============================] - 11s 102us/step - loss: 0.9938 - acc: 0.7419 - val_loss: nan - val_acc: 0.7685
Epoch 7/10
110288/110288 [==============================] - 11s 101us/step - loss: 0.8202 - acc: 0.7881 - val_loss: nan - val_acc: 0.8065
Epoch 8/10
110288/110288 [==============================] - 11s 101us/step - loss: 0.6827 - acc: 0.8198 - val_loss: nan - val_acc: 0.8325
Epoch 9/10
110288/110288 [==============================] - 11s 101us/step - loss: 0.5852 - acc: 0.8410 - val_loss: nan - val_acc: 0.8509
Epoch 10/10
110288/110288 [==============================] - 11s 101us/step - loss: 0.5119 - acc: 0.8583 - val_loss: nan - val_acc: 0.8654
new jersey est parfois calme en l' et et il et il avril <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD>
| MIT | machine_translation.ipynb | jay-thakur/machine_translation |
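Conceptually, `Bidirectional` runs one recurrence left-to-right and a second one right-to-left, then joins the two hidden states at every timestep. A sketch that substitutes running sums for real GRU cells, just to make the data flow visible:

```python
def running_sums(xs):
    """Stand-in for a recurrent pass: the 'hidden state' is just a cumulative sum."""
    out, total = [], 0
    for x in xs:
        total += x
        out.append(total)
    return out

def bidirectional(xs):
    forward = running_sums(xs)
    backward = running_sums(xs[::-1])[::-1]  # process reversed, then re-align
    return list(zip(forward, backward))      # join the two states per timestep

print(bidirectional([1, 2, 3]))  # [(1, 6), (3, 5), (6, 3)]
```

The second element of each pair summarizes everything from that timestep to the end, which is exactly the "future context" a plain left-to-right pass cannot see.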
Model 4: Encoder-Decoder (OPTIONAL)Time to look at encoder-decoder models. This model is made up of an encoder and decoder. The encoder creates a matrix representation of the sentence. The decoder takes this matrix as input and predicts the translation as output.Create an encoder-decoder model in the cell below. | def encdec_model(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
"""
Build and train an encoder-decoder model on x and y
:param input_shape: Tuple of input shape
:param output_sequence_length: Length of output sequence
:param english_vocab_size: Number of unique English words in the dataset
:param french_vocab_size: Number of unique French words in the dataset
:return: Keras model built, but not trained
"""
# OPTIONAL: Implement
inputs = Input(input_shape[1:])
encoded = GRU(256, return_sequences=False)(inputs)
decoded = RepeatVector(output_sequence_length)(encoded)
outputs = GRU(256, return_sequences=True)(decoded)
outputs = Dense(french_vocab_size, activation="softmax")(outputs)
model = Model(inputs, outputs)
learning_rate = 0.001
model.compile(loss=sparse_categorical_crossentropy,
optimizer=Adam(learning_rate),
metrics=['accuracy'])
return model
tests.test_encdec_model(encdec_model)
# OPTIONAL: Train and Print prediction(s)
tmp_x = pad(preproc_english_sentences, preproc_french_sentences.shape[1])
# reshape to 3-D (samples, timesteps, features) since encdec_model has no embedding layer
tmp_x = tmp_x.reshape((-1, preproc_french_sentences.shape[-2], 1))
encdec_rnn_model = encdec_model(
tmp_x.shape,
preproc_french_sentences.shape[1],
english_vocab_size,
french_vocab_size)
encdec_rnn_model.fit(tmp_x, preproc_french_sentences, batch_size=1024, epochs=10, validation_split=0.2)
print(logits_to_text(encdec_rnn_model.predict(tmp_x[:1])[0], french_tokenizer)) | Train on 110288 samples, validate on 27573 samples
Epoch 1/10
110288/110288 [==============================] - 12s 106us/step - loss: 3.3761 - acc: 0.4076 - val_loss: nan - val_acc: 0.4515
Epoch 2/10
110288/110288 [==============================] - 11s 100us/step - loss: 2.5353 - acc: 0.4798 - val_loss: nan - val_acc: 0.5145
Epoch 3/10
110288/110288 [==============================] - 11s 101us/step - loss: 2.0491 - acc: 0.5406 - val_loss: nan - val_acc: 0.5683
Epoch 4/10
110288/110288 [==============================] - 11s 101us/step - loss: 1.5999 - acc: 0.5997 - val_loss: nan - val_acc: 0.6365
Epoch 5/10
110288/110288 [==============================] - 11s 101us/step - loss: 1.3258 - acc: 0.6664 - val_loss: nan - val_acc: 0.6911
Epoch 6/10
110288/110288 [==============================] - 11s 101us/step - loss: 1.1065 - acc: 0.7156 - val_loss: nan - val_acc: 0.7447
Epoch 7/10
110288/110288 [==============================] - 11s 101us/step - loss: 0.9247 - acc: 0.7625 - val_loss: nan - val_acc: 0.7824
Epoch 8/10
110288/110288 [==============================] - 11s 101us/step - loss: 0.7729 - acc: 0.7968 - val_loss: nan - val_acc: 0.8140
Epoch 9/10
110288/110288 [==============================] - 11s 101us/step - loss: 0.6530 - acc: 0.8240 - val_loss: nan - val_acc: 0.8355
Epoch 10/10
110288/110288 [==============================] - 11s 101us/step - loss: 0.5646 - acc: 0.8437 - val_loss: nan - val_acc: 0.8511
new jersey est parfois calme en l' et il est neigeux en avril <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD>
| MIT | machine_translation.ipynb | jay-thakur/machine_translation |
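In the encoder-decoder above, `RepeatVector` is the bridge between the two halves: the encoder's final state is copied once per output timestep so the decoder GRU receives an input at every step. The operation itself is trivial:

```python
def repeat_vector(vector, n):
    """Copy a summary vector n times, one copy per decoder timestep."""
    return [list(vector) for _ in range(n)]

encoded = [0.3, -0.2, 0.5]        # pretend final encoder state
print(repeat_vector(encoded, 4))  # 4 identical rows, one per output step
```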
Model 5: Custom (IMPLEMENTATION)Use everything you learned from the previous models to create a model that incorporates embedding and a bidirectional RNN into one model. | def model_final(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
"""
Build and train a model that incorporates embedding, encoder-decoder, and bidirectional RNN on x and y
:param input_shape: Tuple of input shape
:param output_sequence_length: Length of output sequence
:param english_vocab_size: Number of unique English words in the dataset
:param french_vocab_size: Number of unique French words in the dataset
:return: Keras model built, but not trained
"""
# TODO: Implement
learning_rate = 0.001
inputs = Input(shape=input_shape[1:])
encoded = Embedding(english_vocab_size, 300)(inputs)
encoded = Bidirectional(GRU(512, dropout=0.2))(encoded)
encoded = Dense(512, activation='relu')(encoded)
decoded = RepeatVector(output_sequence_length)(encoded)
decoded = Bidirectional(GRU(512, dropout=0.2, return_sequences=True))(decoded)
decoded = TimeDistributed(Dense(french_vocab_size))(decoded)
predictions = Activation('softmax')(decoded)
model = Model(inputs=inputs, outputs=predictions)
model.compile(loss=sparse_categorical_crossentropy,
optimizer=Adam(learning_rate),
metrics=['accuracy'])
return model
tests.test_model_final(model_final)
print('Final Model Loaded')
# TODO: Train the final model
tmp_x = pad(preproc_english_sentences, preproc_french_sentences.shape[1])
tmp_x = tmp_x.reshape((-1, preproc_french_sentences.shape[-2]))
final_model = model_final(
tmp_x.shape,
preproc_french_sentences.shape[1],
english_vocab_size,
french_vocab_size)
final_model.fit(tmp_x, preproc_french_sentences, batch_size=1024, epochs=10, validation_split=0.2)
print(logits_to_text(final_model.predict(tmp_x[:1])[0], french_tokenizer)) | Final Model Loaded
Train on 110288 samples, validate on 27573 samples
Epoch 1/10
110288/110288 [==============================] - 91s 824us/step - loss: 2.4410 - acc: 0.4865 - val_loss: nan - val_acc: 0.5841
Epoch 2/10
110288/110288 [==============================] - 89s 807us/step - loss: 1.4055 - acc: 0.6253 - val_loss: nan - val_acc: 0.6683
Epoch 3/10
110288/110288 [==============================] - 89s 806us/step - loss: 1.0897 - acc: 0.6953 - val_loss: nan - val_acc: 0.7250
Epoch 4/10
110288/110288 [==============================] - 89s 806us/step - loss: 0.9146 - acc: 0.7333 - val_loss: nan - val_acc: 0.7512
Epoch 5/10
110288/110288 [==============================] - 89s 806us/step - loss: 0.7888 - acc: 0.7662 - val_loss: nan - val_acc: 0.7899
Epoch 6/10
110288/110288 [==============================] - 89s 806us/step - loss: 0.6854 - acc: 0.7937 - val_loss: nan - val_acc: 0.8163
Epoch 7/10
110288/110288 [==============================] - 89s 806us/step - loss: 0.5770 - acc: 0.8241 - val_loss: nan - val_acc: 0.8468
Epoch 8/10
110288/110288 [==============================] - 89s 805us/step - loss: 0.4818 - acc: 0.8538 - val_loss: nan - val_acc: 0.8753
Epoch 9/10
110288/110288 [==============================] - 89s 805us/step - loss: 0.3899 - acc: 0.8836 - val_loss: nan - val_acc: 0.9093
Epoch 10/10
110288/110288 [==============================] - 89s 805us/step - loss: 0.3099 - acc: 0.9085 - val_loss: nan - val_acc: 0.9293
new jersey est parfois calme en l' et il est neigeux en avril <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD>
| MIT | machine_translation.ipynb | jay-thakur/machine_translation |
Prediction (IMPLEMENTATION) | def final_predictions(x, y, x_tk, y_tk):
"""
Gets predictions using the final model
:param x: Preprocessed English data
:param y: Preprocessed French data
:param x_tk: English tokenizer
:param y_tk: French tokenizer
"""
# TODO: Train neural network using model_final
model = model_final(x.shape,
y.shape[1],
len(x_tk.word_index),
len(y_tk.word_index))
model.fit(x, y, batch_size=1024, epochs=12, validation_split=0.2)
## DON'T EDIT ANYTHING BELOW THIS LINE
y_id_to_word = {value: key for key, value in y_tk.word_index.items()}
y_id_to_word[0] = '<PAD>'
sentence = 'he saw a old yellow truck'
sentence = [x_tk.word_index[word] for word in sentence.split()]
sentence = pad_sequences([sentence], maxlen=x.shape[-1], padding='post')
sentences = np.array([sentence[0], x[0]])
predictions = model.predict(sentences, len(sentences))
print('Sample 1:')
print(' '.join([y_id_to_word[np.argmax(x)] for x in predictions[0]]))
print('Il a vu un vieux camion jaune')
print('Sample 2:')
print(' '.join([y_id_to_word[np.argmax(x)] for x in predictions[1]]))
print(' '.join([y_id_to_word[np.max(x)] for x in y[0]]))
final_predictions(preproc_english_sentences, preproc_french_sentences, english_tokenizer, french_tokenizer) | Train on 110288 samples, validate on 27573 samples
Epoch 1/12
110288/110288 [==============================] - 81s 733us/step - loss: 2.3220 - acc: 0.5011 - val_loss: nan - val_acc: 0.5999
Epoch 2/12
110288/110288 [==============================] - 79s 713us/step - loss: 1.3477 - acc: 0.6375 - val_loss: nan - val_acc: 0.6822
Epoch 3/12
110288/110288 [==============================] - 79s 713us/step - loss: 1.0208 - acc: 0.7100 - val_loss: nan - val_acc: 0.7388
Epoch 4/12
110288/110288 [==============================] - 79s 713us/step - loss: 0.8375 - acc: 0.7518 - val_loss: nan - val_acc: 0.7783
Epoch 5/12
110288/110288 [==============================] - 79s 713us/step - loss: 0.7114 - acc: 0.7849 - val_loss: nan - val_acc: 0.8114
Epoch 6/12
110288/110288 [==============================] - 79s 714us/step - loss: 0.5968 - acc: 0.8179 - val_loss: nan - val_acc: 0.8416
Epoch 7/12
110288/110288 [==============================] - 79s 713us/step - loss: 0.4868 - acc: 0.8513 - val_loss: nan - val_acc: 0.8810
Epoch 8/12
110288/110288 [==============================] - 79s 713us/step - loss: 0.3818 - acc: 0.8846 - val_loss: nan - val_acc: 0.9105
Epoch 9/12
110288/110288 [==============================] - 79s 713us/step - loss: 0.2995 - acc: 0.9113 - val_loss: nan - val_acc: 0.9290
Epoch 10/12
110288/110288 [==============================] - 79s 713us/step - loss: 0.2387 - acc: 0.9299 - val_loss: nan - val_acc: 0.9451
Epoch 11/12
110288/110288 [==============================] - 79s 713us/step - loss: 0.1893 - acc: 0.9444 - val_loss: nan - val_acc: 0.9505
Epoch 12/12
110288/110288 [==============================] - 79s 713us/step - loss: 0.1621 - acc: 0.9521 - val_loss: nan - val_acc: 0.9566
Sample 1:
il a vu un vieux camion jaune <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD>
Il a vu un vieux camion jaune
Sample 2:
new jersey est parfois calme pendant l' automne et il est neigeux en avril <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD>
new jersey est parfois calme pendant l' automne et il est neigeux en avril <PAD> <PAD> <PAD> <PAD> <PAD> <PAD> <PAD>
| MIT | machine_translation.ipynb | jay-thakur/machine_translation |
SubmissionWhen you're ready to submit, complete the following steps:1. Review the [rubric](https://review.udacity.com/!/rubrics/1004/view) to ensure your submission meets all requirements to pass2. Generate an HTML version of this notebook - Run the next cell to attempt automatic generation (this is the recommended method in Workspaces) - Navigate to **FILE -> Download as -> HTML (.html)** - Manually generate a copy using `nbconvert` from your shell terminal```$ pip install nbconvert$ python -m nbconvert machine_translation.ipynb``` 3. Submit the project - If you are in a Workspace, simply click the "Submit Project" button (bottom towards the right) - Otherwise, add the following files into a zip archive and submit them - `helper.py` - `machine_translation.ipynb` - `machine_translation.html` - You can export the notebook by navigating to **File -> Download as -> HTML (.html)**. Generate the html**Save your notebook before running the next cell to generate the HTML output.** Then submit your project. | # Save before you run this cell!
!!jupyter nbconvert *.ipynb | _____no_output_____ | MIT | machine_translation.ipynb | jay-thakur/machine_translation |
Unit 5 - Financial Planning | # Initial imports
import os
import requests
import pandas as pd
from dotenv import load_dotenv
import alpaca_trade_api as tradeapi
from MCForecastTools import MCSimulation
# date here
from datetime import date
%matplotlib inline
# Load .env environment variables
load_dotenv() | _____no_output_____ | Apache-2.0 | Scratch Folder/financial-planner-v4.ipynb | HassanAlam55/ColumbiaFintechHW5API |
Part 1 - Personal Finance Planner Collect Crypto Prices Using the `requests` Library | # Set current amount of crypto assets
# YOUR CODE HERE!
my_btc = 1.2
my_eth = 5.3
# Crypto API URLs
btc_url = "https://api.alternative.me/v2/ticker/Bitcoin/?convert=CAD"
eth_url = "https://api.alternative.me/v2/ticker/Ethereum/?convert=CAD"
# response_data = requests.get(create_deck_url).json()
# response_data
btc_resp = requests.get(btc_url).json()
btc_price = btc_resp['data']['1']['quotes']['USD']['price']
my_btc_value =my_btc * btc_price
eth_resp = requests.get(eth_url).json()
eth_price = eth_resp['data']['1027']['quotes']['USD']['price']
my_eth_value = my_eth * eth_price
print(f"The current value of your {my_btc} BTC is ${my_btc_value:0.2f}")
print(f"The current value of your {my_eth} ETH is ${my_eth_value:0.2f}") | The current value of your 1.2 BTC is $60097.20
The current value of your 5.3 ETH is $11897.07
| Apache-2.0 | Scratch Folder/financial-planner-v4.ipynb | HassanAlam55/ColumbiaFintechHW5API |
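The chained lookups above (`['data']['1']['quotes']['USD']['price']`) raise a `KeyError` if the API response ever changes shape. One defensive pattern is a small helper that walks the nested keys with a fallback; the payload below is a mock that mirrors the alternative.me shape, with an invented price:

```python
def safe_get(payload, *keys, default=None):
    """Walk nested dict keys, returning default if any level is missing."""
    for key in keys:
        if not isinstance(payload, dict) or key not in payload:
            return default
        payload = payload[key]
    return payload

mock_response = {'data': {'1': {'quotes': {'USD': {'price': 50081.0}}}}}
print(safe_get(mock_response, 'data', '1', 'quotes', 'USD', 'price'))        # 50081.0
print(safe_get(mock_response, 'data', '9999', 'quotes', default='missing'))  # missing
```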
Collect Investments Data Using Alpaca: `SPY` (stocks) and `AGG` (bonds) | # Current amount of shares
# Create two variables named my_agg and my_spy and set them equal to 200 and 50, respectively.
# YOUR CODE HERE!
my_agg = 200
my_spy = 50
# Set Alpaca API key and secret
# YOUR CODE HERE!
alpaca_api_key = os.getenv("ALPACA_API_KEY")
alpaca_secret_key = os.getenv("ALPACA_SECRET_KEY")
# Create the Alpaca API object
# YOUR CODE HERE!
api = tradeapi.REST(
alpaca_api_key,
alpaca_secret_key,
api_version = "v2"
)
# Format current date as ISO format
# YOUR CODE HERE!
# use "2021-04-16" so weekend gives actual data
start_date = pd.Timestamp("2021-04-16", tz="America/New_York").isoformat()
today_date = pd.Timestamp(date.today(), tz="America/New_York").isoformat()
# Set the tickers
tickers = ["AGG", "SPY"]
# Set timeframe to '1D' for Alpaca API
timeframe = "1D"
# Get current closing prices for SPY and AGG
# YOUR CODE HERE!
ticker_data = api.get_barset(
tickers,
timeframe,
start=start_date,
end=start_date,
).df
# Preview DataFrame
# YOUR CODE HERE!
ticker_data
# Pick AGG and SPY close prices
# YOUR CODE HERE!
agg_close_price = ticker_data['AGG']['close'][0]
spy_close_price = ticker_data['SPY']['close'][0]
# Print AGG and SPY close prices
print(f"Current AGG closing price: ${agg_close_price}")
print(f"Current SPY closing price: ${spy_close_price}")
# Compute the current value of shares
# YOUR CODE HERE!
my_spy_value = spy_close_price * my_spy
my_agg_value = agg_close_price * my_agg
# Print current value of share
print(f"The current value of your {my_spy} SPY shares is ${my_spy_value:0.2f}")
print(f"The current value of your {my_agg} AGG shares is ${my_agg_value:0.2f}") | The current value of your 50 SPY shares is $20865.50
The current value of your 200 AGG shares is $22908.00
| Apache-2.0 | Scratch Folder/financial-planner-v4.ipynb | HassanAlam55/ColumbiaFintechHW5API |
Savings Health Analysis | # Set monthly household income
# YOUR CODE HERE!
monthly_income = 12000
# Create savings DataFrame
# YOUR CODE HERE!
df_savings = pd.DataFrame([my_btc_value+ my_eth_value, my_spy_value + my_agg_value], columns = ['amount'], index = ['crypto', 'shares'])
# Display savings DataFrame
display(df_savings)
# Plot savings pie chart
# YOUR CODE HERE!
# plot from the DataFrame so the index labels ('crypto', 'shares') appear on the wedges
df_savings.plot.pie(y='amount', title='Composition of Personal Savings')
# Set ideal emergency fund
emergency_fund = monthly_income * 3
# Calculate total amount of savings
# YOUR CODE HERE!
total_saving = df_savings['amount'].sum()
# Validate saving health
# YOUR CODE HERE!
# If total savings are greater than the emergency fund, display a message congratulating the person for having enough money in this fund.
# If total savings are equal to the emergency fund, display a message congratulating the person on reaching this financial goal.
# If total savings are less than the emergency fund, display a message showing how many dollars away the person is from reaching the goal.
if total_saving > emergency_fund:
print ('Congratulations! You have enough money in your emergency fund.')
elif total_saving == emergency_fund:
print ('Congratulations! You reached your financial goal.')
else:
print (f'You are making great progress. You need to save $ {round((emergency_fund - total_saving), 2)}') | Congratulations! You have enough money in your emergency fund.
| Apache-2.0 | Scratch Folder/financial-planner-v4.ipynb | HassanAlam55/ColumbiaFintechHW5API |
Part 2 - Retirement Planning Monte Carlo Simulation Hassan's Note: for some reason Alpaca would not let me get more than 1,000 records per request. To get 5 years of data (252 * 5), I had to break the request into two reads and concatenate the results. | # Set start and end dates of five years back from today.
# Sample results may vary from the solution based on the time frame chosen
start_date1 = pd.Timestamp('2015-08-07', tz='America/New_York').isoformat()
end_date1 = pd.Timestamp('2017-08-07', tz='America/New_York').isoformat()
start_date2 = pd.Timestamp('2017-08-08', tz='America/New_York').isoformat()
end_date2 = pd.Timestamp('2020-08-07', tz='America/New_York').isoformat()
# end_date1 = pd.Timestamp('2019-08-07', tz='America/New_York').isoformat()
# hits the 1000-item limit, have to do it in two batches and print
# Get 5 years' worth of historical data for SPY and AGG
# YOUR CODE HERE!
# Display sample data
df_stock_data.head()
# Get 5 years' worth of historical data for SPY and AGG
# create two dataframes and concatenate them
# first period data frame
df_stock_data1 = api.get_barset(
tickers,
timeframe,
start = start_date1,
end = end_date1,
limit = 1000
).df
# second period dataframe
df_stock_data2 = api.get_barset(
tickers,
timeframe,
start = start_date2,
end = end_date2,
limit = 1000
).df
df_stock_data = pd.concat ([df_stock_data1, df_stock_data2], axis = 0, join = 'inner')
print (f'stock data head: ')
print (df_stock_data.head(5))
print (f'\nstock data 1 tail: ')
print (df_stock_data.tail(5))
# print (f'stock data 1 head: {start_date1}')
# print (df_stock_data1.head(5))
# print (f'\nstock data 1 tail: {end_date1}')
# print (df_stock_data1.tail(5))
# print (f'\nstock data 2 head: {start_date2}')
# print (df_stock_data2.head(5))
# print (f'\nstock data 2 tail: {end_date2}')
# print (df_stock_data2.tail(5))
# delete
# # Get 5 years' worth of historical data for SPY and AGG
# # YOUR CODE HERE!
# df_stock_data = api.get_barset(
# tickers,
# timeframe,
# start = start_date,
# end = end_date,
# limit = 1000
# ).df
# # Display sample data
# # to fix.
# df_stock_data.head()
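The two-batch workaround above can be generalized. Here is an editor's sketch (not part of the original notebook) that splits any date range into consecutive windows small enough to stay under a per-request cap like Alpaca's 1000-bar limit; it counts calendar days, while bars are trading days, so the real cap is looser still:

```python
from datetime import date, timedelta

def date_chunks(start, end, max_days):
    """Split the inclusive range [start, end] into consecutive
    windows of at most max_days calendar days each."""
    chunks = []
    cur = start
    while cur <= end:
        stop = min(cur + timedelta(days=max_days - 1), end)
        chunks.append((cur, stop))
        cur = stop + timedelta(days=1)
    return chunks

# the notebook's five-year span needs two windows under a 1000-day cap
windows = date_chunks(date(2015, 8, 7), date(2020, 8, 7), max_days=1000)
print(windows)
```

Each `(start, stop)` pair would then drive one `api.get_barset(...)` call, with the per-window frames concatenated via `pd.concat` exactly as the notebook already does.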
# Configuring a Monte Carlo simulation to forecast 30 years cumulative returns
# YOUR CODE HERE!
MC_even_dist = MCSimulation(
portfolio_data = df_stock_data,
weights = [.4, .6],
num_simulation = 500,
num_trading_days = 252*30
)
# Printing the simulation input data
# YOUR CODE HERE!
# Printing the simulation input data
# YOUR CODE HERE!
MC_even_dist.portfolio_data.head()
# Running a Monte Carlo simulation to forecast 30 years cumulative returns
# YOUR CODE HERE!
# Running a Monte Carlo simulation to forecast 30 years cumulative returns
# YOUR CODE HERE!
MC_even_dist.calc_cumulative_return()
# Plot simulation outcomes
# YOUR CODE HERE!
# Plot simulation outcomes
line_plot = MC_even_dist.plot_simulation()
# delete
# Plot probability distribution and confidence intervals
# YOUR CODE HERE!
# Plot probability distribution and confidence intervals
# YOUR CODE HERE!
dist_plot = MC_even_dist.plot_distribution() | _____no_output_____ | Apache-2.0 | Scratch Folder/financial-planner-v4.ipynb | HassanAlam55/ColumbiaFintechHW5API |
Retirement Analysis | # delete
# Fetch summary statistics from the Monte Carlo simulation results
# YOUR CODE HERE!
# Print summary statistics
# YOUR CODE HERE!
# Fetch summary statistics from the Monte Carlo simulation results
# YOUR CODE HERE!
tbl = MC_even_dist.summarize_cumulative_return()
# Print summary statistics
print (tbl) | count 500.000000
mean 9.683146
std 6.514698
min 1.320640
25% 5.124632
50% 8.047245
75% 12.513634
max 51.017155
95% CI Lower 2.421887
95% CI Upper 25.440141
Name: 7560, dtype: float64
| Apache-2.0 | Scratch Folder/financial-planner-v4.ipynb | HassanAlam55/ColumbiaFintechHW5API |
Calculate the expected portfolio return at the 95% lower and upper confidence intervals based on a `$20,000` initial investment. | # delete
# # Set initial investment
# initial_investment = 20000
# # Use the lower and upper `95%` confidence intervals to calculate the range of the possible outcomes of our $20,000
# # YOUR CODE HERE!
# # Print results
# print(f"There is a 95% chance that an initial investment of ${initial_investment} in the portfolio"
# f" over the next 30 years will end within in the range of"
# f" ${ci_lower} and ${ci_upper}")
# Set initial investment
initial_investment = 20000
# Use the lower and upper `95%` confidence intervals to calculate the range of the possible outcomes of our $20,000
# YOUR CODE HERE!
ci_lower = round(tbl[8]*initial_investment,2)
ci_upper = round(tbl[9]*initial_investment,2)
# Print results
print(f"There is a 95% chance that an initial investment of ${initial_investment} in the portfolio"
f" over the next 30 years will end within the range of"
f" ${ci_lower} and ${ci_upper}") | There is a 95% chance that an initial investment of $20000 in the portfolio over the next 30 years will end within in the range of $48437.75 and $508802.81
| Apache-2.0 | Scratch Folder/financial-planner-v4.ipynb | HassanAlam55/ColumbiaFintechHW5API |
Calculate the expected portfolio return at the `95%` lower and upper confidence intervals based on a `50%` increase in the initial investment. | # delete
# # Set initial investment
# initial_investment = 20000 * 1.5
# # Use the lower and upper `95%` confidence intervals to calculate the range of the possible outcomes of our $30,000
# # YOUR CODE HERE!
# # Print results
# print(f"There is a 95% chance that an initial investment of ${initial_investment} in the portfolio"
# f" over the next 30 years will end within in the range of"
# f" ${ci_lower} and ${ci_upper}")
# Set initial investment
initial_investment = 20000 * 1.5
# Use the lower and upper `95%` confidence intervals to calculate the range of the possible outcomes of our $30,000
# YOUR CODE HERE!
ci_lower = round(tbl[8]*initial_investment,2)
ci_upper = round(tbl[9]*initial_investment,2)
# Print results
print(f"There is a 95% chance that an initial investment of ${initial_investment} in the portfolio"
f" over the next 30 years will end within the range of"
f" ${ci_lower} and ${ci_upper}") | There is a 95% chance that an initial investment of $30000.0 in the portfolio over the next 30 years will end within in the range of $72656.62 and $763204.22
| Apache-2.0 | Scratch Folder/financial-planner-v4.ipynb | HassanAlam55/ColumbiaFintechHW5API |
Optional Challenge - Early Retirement Five Years Retirement Option | # Configuring a Monte Carlo simulation to forecast 5 years cumulative returns
# YOUR CODE HERE!
MC_even_dist = MCSimulation(
portfolio_data = df_stock_data,
weights = [.4, .6],
num_simulation = 500,
num_trading_days = 252*5
)
# Running a Monte Carlo simulation to forecast 5 years cumulative returns
# YOUR CODE HERE!
# Running a Monte Carlo simulation to forecast 5 years cumulative returns
MC_even_dist.calc_cumulative_return()
# Plot simulation outcomes
# YOUR CODE HERE!
# Plot simulation outcomes
line_plot = MC_even_dist.plot_simulation()
# Plot probability distribution and confidence intervals
# YOUR CODE HERE!
# Plot probability distribution and confidence intervals
# YOUR CODE HERE!
dist_plot = MC_even_dist.plot_distribution()
# Fetch summary statistics from the Monte Carlo simulation results
# YOUR CODE HERE!
# Print summary statistics
# YOUR CODE HERE!
# Fetch summary statistics from the Monte Carlo simulation results
# YOUR CODE HERE!
tbl = MC_even_dist.summarize_cumulative_return()
# Print summary statistics
print (tbl)
# Set initial investment
# YOUR CODE HERE!
# Use the lower and upper `95%` confidence intervals to calculate the range of the possible outcomes of our $60,000
# YOUR CODE HERE!
# Print results
print(f"There is a 95% chance that an initial investment of ${initial_investment} in the portfolio"
f" over the next 5 years will end within the range of"
f" ${ci_lower_five} and ${ci_upper_five}")
# Set initial investment
# YOUR CODE HERE!
initial_investment = 60000
# Use the lower and upper `95%` confidence intervals to calculate the range of the possible outcomes of our $60,000
# YOUR CODE HERE!
ci_lower_five = round(tbl[8]*initial_investment,2)
ci_upper_five = round(tbl[9]*initial_investment,2)
# Print results
print(f"There is a 95% chance that an initial investment of ${initial_investment} in the portfolio"
f" over the next 5 years will end within the range of"
f" ${ci_lower_five} and ${ci_upper_five}") | There is a 95% chance that an initial investment of $60000 in the portfolio over the next 5 years will end within in the range of $51278.36 and $141959.07
| Apache-2.0 | Scratch Folder/financial-planner-v4.ipynb | HassanAlam55/ColumbiaFintechHW5API |
Ten Years Retirement Option | # Configuring a Monte Carlo simulation to forecast 10 years cumulative returns
# YOUR CODE HERE!
MC_even_dist = MCSimulation(
portfolio_data = df_stock_data,
weights = [.4, .6],
num_simulation = 500,
num_trading_days = 252*10
)
# Running a Monte Carlo simulation to forecast 10 years cumulative returns
# YOUR CODE HERE!
MC_even_dist.calc_cumulative_return()
# Plot simulation outcomes
# YOUR CODE HERE!
# Plot simulation outcomes
line_plot = MC_even_dist.plot_simulation()
# Plot probability distribution and confidence intervals
# YOUR CODE HERE!
# Plot probability distribution and confidence intervals
dist_plot = MC_even_dist.plot_distribution()
# Fetch summary statistics from the Monte Carlo simulation results
# YOUR CODE HERE!
# Print summary statistics
# YOUR CODE HERE!
# Fetch summary statistics from the Monte Carlo simulation results
# YOUR CODE HERE!
tbl = MC_even_dist.summarize_cumulative_return()
# Print summary statistics
print (tbl)
# Set initial investment
# YOUR CODE HERE!
# Use the lower and upper `95%` confidence intervals to calculate the range of the possible outcomes of our $60,000
# YOUR CODE HERE!
# Print results
print(f"There is a 95% chance that an initial investment of ${initial_investment} in the portfolio"
f" over the next 10 years will end within the range of"
f" ${ci_lower_ten} and ${ci_upper_ten}")
# Set initial investment
# YOUR CODE HERE!
initial_investment = 60000
# Use the lower and upper `95%` confidence intervals to calculate the range of the possible outcomes of our $60,000
# YOUR CODE HERE!
ci_lower_ten = round(tbl[8]*initial_investment,2)
ci_upper_ten = round(tbl[9]*initial_investment,2)
# Print results
print(f"There is a 95% chance that an initial investment of ${initial_investment} in the portfolio"
f" over the next 10 years will end within the range of"
f" ${ci_lower_ten} and ${ci_upper_ten}") | There is a 95% chance that an initial investment of $60000 in the portfolio over the next 10 years will end within in the range of $55174.9 and $241011.77
| Apache-2.0 | Scratch Folder/financial-planner-v4.ipynb | HassanAlam55/ColumbiaFintechHW5API |
!pip install h5py pyyaml
import tensorflow as tf
from tensorflow import keras
import urllib.request
print('Beginning file download with urllib.request...')
url = 'https://drive.google.com/file/d/1Z0dlwhtS03lYhHv-f3ZT7nf_PgbIXBMB/view?usp=sharing'
urllib.request.urlretrieve(url, 'basic_CNN.h5')
basic_CNN = keras.models.load_model('basic_CNN.h5')
from google.colab import drive
drive.mount('/content/gdrive')
%cd /content/gdrive/My Drive/Colab Notebooks/mBSUS
#basic_CNN = keras.models.load_model("basic_CNN.h5")
import os
import numpy as np
from google.colab import files
from tensorflow.keras.preprocessing import image
os.chdir('/content/')
uploaded = files.upload()
for fn in uploaded.keys():
path = '/content/' + fn
img = image.load_img(path, target_size = (300,300))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
images = np.vstack([x])
classes = basic_CNN.predict(images, batch_size = 10)
print(classes[0])
if classes[0] > 0.5:
print (fn + " is pneumonia: " + str(classes[0]))
else:
print(fn + " is not pneumonia: " + str(classes[0]))
| _____no_output_____ | Apache-2.0 | mBSUS_XRay_Test_Program.ipynb | rjthompson22/bMSUS_XRay_Model | |
Abstract Factory Design Pattern >An abstract factory is a creational design pattern that lets you create families of related objects without coupling to the concrete classes of those objects. The pattern is implemented by writing an abstract class (for example, Factory) that serves as the interface for creating system components, and then writing the classes that implement that interface. https://py.checkio.org/blog/design-patterns-part-1/ | class AbstractFactory:
def create_chair(self):
raise NotImplementedError()
def create_sofa(self):
raise NotImplementedError()
def create_table(self):
raise NotImplementedError()
class Chair:
def __init__(self, name):
self._name = name
def __str__(self):
return self._name
class Sofa:
def __init__(self, name):
self._name = name
def __str__(self):
return self._name
class Table:
def __init__(self, name):
self._name = name
def __str__(self):
return self._name
class VictorianFactory(AbstractFactory):
def create_chair(self):
return Chair('victorian chair')
def create_sofa(self):
return Sofa('victorian sofa')
def create_table(self):
return Table('victorian table')
class ModernFactory(AbstractFactory):
def create_chair(self):
return Chair('modern chair')
def create_sofa(self):
return Sofa('modern sofa')
def create_table(self):
return Table('modern table')
class FuturisticFactory(AbstractFactory):
def create_chair(self):
return Chair('futuristic chair')
def create_sofa(self):
return Sofa('futuristic sofa')
def create_table(self):
return Table('futuristic table')
factory_1 = VictorianFactory()
factory_2 = ModernFactory()
factory_3 = FuturisticFactory()
print(factory_1.create_chair())
print(factory_1.create_sofa())
print(factory_1.create_table())
print(factory_2.create_chair())
print(factory_2.create_sofa())
print(factory_2.create_table())
print(factory_3.create_chair())
print(factory_3.create_sofa())
print(factory_3.create_table()) | victorian chair
victorian sofa
victorian table
modern chair
modern sofa
modern table
futuristic chair
futuristic sofa
futuristic table
| MIT | Other/Creational Design Pattern - Abstract Factory.ipynb | deepaksood619/Python-Competitive-Programming |
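The payoff of the pattern is that client code can depend only on the factory interface and never name a concrete product; a small self-contained sketch (an editor's addition, reusing the same classes in reduced form):

```python
class Chair:
    def __init__(self, name):
        self._name = name

    def __str__(self):
        return self._name

class VictorianFactory:
    def create_chair(self):
        return Chair('victorian chair')

class ModernFactory:
    def create_chair(self):
        return Chair('modern chair')

def furnish_room(factory):
    # the client never references a concrete factory or product class
    return str(factory.create_chair())

print(furnish_room(VictorianFactory()))  # victorian chair
print(furnish_room(ModernFactory()))     # modern chair
```

Swapping the factory swaps the whole product family without touching `furnish_room`.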
Example - https://py.checkio.org/mission/army-units/solve/ | class Army:
def train_swordsman(self, name):
raise NotImplementedError()
def train_lancer(self, name):
raise NotImplementedError()
def train_archer(self, name):
raise NotImplementedError()
class Swordsman:
def __init__(self, soldier_type, name, army_type):
self.army_type = army_type + ' swordsman'
self.name = name
self.soldier_type = soldier_type
def introduce(self):
return '{} {}, {}'.format(self.soldier_type, self.name, self.army_type)
class Lancer:
def __init__(self, soldier_type, name, army_type):
self.army_type = army_type + ' lancer'
self.name = name
self.soldier_type = soldier_type
def introduce(self):
return '{} {}, {}'.format(self.soldier_type, self.name, self.army_type)
class Archer:
def __init__(self, soldier_type, name, army_type):
self.army_type = army_type + ' archer'
self.name = name
self.soldier_type = soldier_type
def introduce(self):
return '{} {}, {}'.format(self.soldier_type, self.name, self.army_type)
class AsianArmy(Army):
def __init__(self):
self.army_type = 'Asian'
def train_swordsman(self, name):
return Swordsman('Samurai', name, self.army_type)
def train_lancer(self, name):
return Lancer('Ronin', name, self.army_type)
def train_archer(self, name):
return Archer('Shinobi', name, self.army_type)
class EuropeanArmy(Army):
def __init__(self):
self.army_type = 'European'
def train_swordsman(self, name):
return Swordsman('Knight', name, self.army_type)
def train_lancer(self, name):
return Lancer('Raubritter', name, self.army_type)
def train_archer(self, name):
return Archer('Ranger', name, self.army_type)
if __name__ == '__main__':
#These "asserts" using only for self-checking and not necessary for auto-testing
my_army = EuropeanArmy()
enemy_army = AsianArmy()
soldier_1 = my_army.train_swordsman("Jaks")
soldier_2 = my_army.train_lancer("Harold")
soldier_3 = my_army.train_archer("Robin")
soldier_4 = enemy_army.train_swordsman("Kishimoto")
soldier_5 = enemy_army.train_lancer("Ayabusa")
soldier_6 = enemy_army.train_archer("Kirigae")
assert soldier_1.introduce() == "Knight Jaks, European swordsman"
assert soldier_2.introduce() == "Raubritter Harold, European lancer"
assert soldier_3.introduce() == "Ranger Robin, European archer"
assert soldier_4.introduce() == "Samurai Kishimoto, Asian swordsman"
assert soldier_5.introduce() == "Ronin Ayabusa, Asian lancer"
assert soldier_6.introduce() == "Shinobi Kirigae, Asian archer"
print("Coding complete? Let's try tests!")
class Army:
def train_swordsman(self, name):
return Swordsman(self, name)
def train_lancer(self, name):
return Lancer(self, name)
def train_archer(self, name):
return Archer(self, name)
def introduce(self, name, army_type):
return f'{self.title[army_type]} {name}, {self.region} {army_type}'
class Fighter:
def __init__(self, army, name):
self.army = army
self.name = name
def introduce(self):
return self.army.introduce(self.name, self.army_type)
class Swordsman(Fighter):
army_type = 'swordsman'
class Lancer(Fighter):
army_type = 'lancer'
class Archer(Fighter):
army_type = 'archer'
class AsianArmy(Army):
title = {'swordsman': 'Samurai', 'lancer': 'Ronin', 'archer': 'Shinobi'}
region = 'Asian'
class EuropeanArmy(Army):
title = {'swordsman': 'Knight', 'lancer': 'Raubritter', 'archer': 'Ranger'}
region = 'European'
if __name__ == '__main__':
#These "asserts" using only for self-checking and not necessary for auto-testing
my_army = EuropeanArmy()
enemy_army = AsianArmy()
soldier_1 = my_army.train_swordsman("Jaks")
soldier_2 = my_army.train_lancer("Harold")
soldier_3 = my_army.train_archer("Robin")
soldier_4 = enemy_army.train_swordsman("Kishimoto")
soldier_5 = enemy_army.train_lancer("Ayabusa")
soldier_6 = enemy_army.train_archer("Kirigae")
print(soldier_1.introduce())
print("Knight Jaks, European swordsman")
print(soldier_2.introduce())
print(soldier_3.introduce())
assert soldier_1.introduce() == "Knight Jaks, European swordsman"
assert soldier_2.introduce() == "Raubritter Harold, European lancer"
assert soldier_3.introduce() == "Ranger Robin, European archer"
assert soldier_4.introduce() == "Samurai Kishimoto, Asian swordsman"
assert soldier_5.introduce() == "Ronin Ayabusa, Asian lancer"
assert soldier_6.introduce() == "Shinobi Kirigae, Asian archer"
print("Coding complete? Let's try tests!") | Knight Jaks, European swordsman
Knight Jaks, European swordsman
Raubritter Harold, European lancer
Ranger Robin, European archer
Coding complete? Let's try tests!
| MIT | Other/Creational Design Pattern - Abstract Factory.ipynb | deepaksood619/Python-Competitive-Programming |
Cosine prediction Data preparationIn this notebook, we will use fost to predict a cosine curve. Let's import `pandas` and `numpy` first. | import pandas as pd
import numpy as np | _____no_output_____ | MIT | examples/1. Cosine prediction.ipynb | meng-zha/FOST |
Next, generate cosine data. | train_df = pd.DataFrame()
#we don't have actual timestamp in this case, use a default one
train_df['Date'] = [pd.Timestamp(year=2000, month=1, day=1) + pd.Timedelta(days=i) for i in range(2000)]
train_df['TARGET'] = [np.cos(i/2) for i in range(2000)]
#there is not a 'Node' concept in this dataset, thus all the Node are 0
train_df.loc[:, 'Node'] = 0
#show first 40 data
train_df.iloc[:40].set_index('Date')['TARGET'].plot() | _____no_output_____ | MIT | examples/1. Cosine prediction.ipynb | meng-zha/FOST |
FOST only supports file path as input, save the data to `train.csv` | train_df.to_csv('train.csv',index=False) | _____no_output_____ | MIT | examples/1. Cosine prediction.ipynb | meng-zha/FOST |
Load fost to predict future | import fostool
from fostool.pipeline import Pipeline
#predict future 10 steps
lookahead = 10
fost = Pipeline(lookahead=lookahead, train_path='train.csv')
#fit in one line
fost.fit()
#get predict result
res = fost.predict()
#plot prediction
fost.plot(res, lookback_size=lookahead) | _____no_output_____ | MIT | examples/1. Cosine prediction.ipynb | meng-zha/FOST |
Pytorch Basic | import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
from IPython.display import clear_output
torch.cuda.is_available() | _____no_output_____ | MIT | my_notebook/1_basic_forward.ipynb | jjeamin/pytorch-tutorial |
Device | device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') | _____no_output_____ | MIT | my_notebook/1_basic_forward.ipynb | jjeamin/pytorch-tutorial |
Hyper Parameter | input_size = 784
hidden_size = 500
num_class = 10
epochs = 5
batch_size = 100
lr = 0.001 | _____no_output_____ | MIT | my_notebook/1_basic_forward.ipynb | jjeamin/pytorch-tutorial |
Load MNIST Dataset | train_dataset = torchvision.datasets.MNIST(root='../data',
train=True,
transform=transforms.ToTensor(),
download=True)
test_dataset = torchvision.datasets.MNIST(root='../data',
train=False,
transform=transforms.ToTensor())
print('train dataset shape : ',train_dataset.data.shape)
print('test dataset shape : ',test_dataset.data.shape)
plt.imshow(train_dataset.data[0])
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False) | _____no_output_____ | MIT | my_notebook/1_basic_forward.ipynb | jjeamin/pytorch-tutorial |
Simple Model | class NeuralNet(nn.Module):
def __init__(self, input_size, hidden_size, num_class):
super(NeuralNet, self).__init__()
self.fc1 = nn.Linear(input_size,hidden_size)
self.relu = nn.ReLU()
self.fc2 = nn.Linear(hidden_size, num_class)
def forward(self, x):
out = self.fc1(x)
out = self.relu(out)
out = self.fc2(out)
return out
model = NeuralNet(input_size,hidden_size,num_class).to(device) | _____no_output_____ | MIT | my_notebook/1_basic_forward.ipynb | jjeamin/pytorch-tutorial |
Loss and Optimizer | criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=lr) | _____no_output_____ | MIT | my_notebook/1_basic_forward.ipynb | jjeamin/pytorch-tutorial |
Train | total_step = len(train_loader)
for epoch in range(epochs):
for i, (images, labels) in enumerate(train_loader):
images = images.reshape(-1,28*28).to(device)
labels = labels.to(device)
outputs = model(images)
loss = criterion(outputs, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
if (i+1) % 100 == 0:
clear_output()
print('EPOCH [{}/{}] STEP [{}/{}] Loss {:.4f}'
.format(epoch+1, epochs, i+1, total_step, loss.item())) | EPOCH [5/5] STEP [600/600] Loss 0.0343
| MIT | my_notebook/1_basic_forward.ipynb | jjeamin/pytorch-tutorial |
Test | with torch.no_grad():
correct = 0
total = 0
for images, labels in test_loader:
images = images.reshape(-1, 28*28).to(device)
labels = labels.to(device)
outputs = model(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: {} %'.format(100 * correct / total)) | Accuracy of the network on the 10000 test images: 97.85 %
| MIT | my_notebook/1_basic_forward.ipynb | jjeamin/pytorch-tutorial |
save | torch.save(model.state_dict(), 'model.ckpt') | _____no_output_____ | MIT | my_notebook/1_basic_forward.ipynb | jjeamin/pytorch-tutorial |
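Saving is only half the round trip; here is a minimal sketch (an editor's addition) of restoring the checkpoint — the architecture has to be rebuilt before `load_state_dict` can fill it. The save call below stands in for the training run above so the snippet is self-contained:

```python
import torch
import torch.nn as nn

class NeuralNet(nn.Module):
    def __init__(self, input_size, hidden_size, num_class):
        super(NeuralNet, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_size, num_class)

    def forward(self, x):
        return self.fc2(self.relu(self.fc1(x)))

model = NeuralNet(784, 500, 10)
torch.save(model.state_dict(), 'model.ckpt')   # stands in for the trained model

# rebuild the architecture, then load the saved weights into it
restored = NeuralNet(784, 500, 10)
restored.load_state_dict(torch.load('model.ckpt'))
restored.eval()  # switch to inference mode before predicting
```

Only the `state_dict` is on disk, so the class definition must be available (and identical) wherever the checkpoint is loaded.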
Import libraries and data series | import time
start = time.time()
# import data and libraries
import numpy as np
import pandas as pd
from datetime import datetime
import matplotlib.pyplot as plt
from scipy import signal
from sklearn.linear_model import LinearRegression
from statsmodels.tsa.seasonal import seasonal_decompose
from scipy.stats import boxcox
from scipy import special
# read the data and special-days Excel files
general = pd.read_excel (r'C:\Users\Diana\PAP\Data\Data1.xlsx')
special_days= pd.read_excel (r'C:\Users\Diana\PAP\Data\Christmas.xlsx')
# convert special days to Python dates
for column in special_days.columns:
special_days[column] = pd.to_datetime(special_days[column])
general = general.set_index('fecha') | _____no_output_____ | MIT | Efecto Arima en final.ipynb | ramirezdiana/Forecast-with-fourier |
Define the functions to use | def kronecker(data1:'DataFrame 1', data2:'DataFrame 2'):
# DataFrame.append was removed in pandas 2.0; collect the row-wise
# Kronecker blocks and concatenate them in a single call instead
blocks = [pd.DataFrame(np.kron(data1[x:x + 1], data2[x:x + 1])) for x in range(len(data1))]
return pd.concat(blocks)
def regresion_linear(X:'regression variables', y:'data'):
global model
model.fit(X, y)
coefficients=model.coef_
return model.predict(X)
def comparacion(real,pred):
comparacion=pd.DataFrame(columns=['real','prediccion','error'])
comparacion.real=real
comparacion.prediccion=pred
comparacion.error=np.abs((comparacion.real.values-comparacion.prediccion)/comparacion.real)*100
return comparacion | _____no_output_____ | MIT | Efecto Arima en final.ipynb | ramirezdiana/Forecast-with-fourier |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.