What happens if you specify an index that is too high?
beatles[7]
participants_notebooks/Sunoikisis - Named Entity Extraction 1a-GB.ipynb
mromanello/SunoikisisDC_NER
gpl-3.0
How can you know how long a list is?
len(beatles)
participants_notebooks/Sunoikisis - Named Entity Extraction 1a-GB.ipynb
mromanello/SunoikisisDC_NER
gpl-3.0
Do remember that indexing starts at 0, so don't make the mistake of thinking that yourlist[len(yourlist)] will give you the last item of your list!
beatles[len(beatles)]
participants_notebooks/Sunoikisis - Named Entity Extraction 1a-GB.ipynb
mromanello/SunoikisisDC_NER
gpl-3.0
This will work!
beatles[len(beatles) - 1]
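As a side note (not in the original notebook): Python also supports negative indices, which are a shorter way to count from the end of a list. A minimal sketch, assuming the beatles list holds the four first names as elsewhere in this notebook:

```python
beatles = ["John", "Paul", "George", "Ringo"]

# Negative indices count from the end: -1 is the last item, -2 the second-to-last
print(beatles[-1])  # same as beatles[len(beatles) - 1]
print(beatles[-2])
```

So beatles[-1] is the idiomatic spelling of beatles[len(beatles) - 1].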
If-statements

Most of the time, what you want to do when you program is to check a value and execute some operation depending on whether the value matches some condition. That's where if statements help! In its simplest form, an if statement is a syntactic construction that checks whether a condition is met; if it is, some code is executed.
bassist = "Paul McCartney"
if bassist == "Paul McCartney":
    print("Paul played bass with the Beatles!")
Mind the indentation! It is the essential element in the syntax of the statement.
bassist = "Bill Wyman"
if bassist == "Paul McCartney":
    print("I'm part of the if statement...")
    print("Paul played bass in the Beatles!")
What happens if the condition is not met? Nothing! The indented code is not executed, because the condition is not met, so lines 4 and 5 are simply skipped. But what happens if we de-indent line 5? Can you guess why this is what happens? Most of the time, we need to specify what happens if the condition is not met:
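The de-indent experiment mentioned above can be sketched like this (the exact cell is not reproduced in this extract, so the strings are illustrative):

```python
bassist = "Bill Wyman"
if bassist == "Paul McCartney":
    print("I'm part of the if statement...")  # skipped: the condition is False
print("I'm de-indented, so I run no matter what!")  # outside the if block
```

Because the last print is no longer indented, it is not part of the if block and runs regardless of whether the condition is met.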
bassist = ""
if bassist == "Paul McCartney":
    print("Paul played bass in the Beatles!")
else:
    print("This guy did not play for the Beatles...")
This is the flow:
* the condition in line 3 is checked
* is it met?
* yes: then line 4 is executed
* no: then line 6 is executed

Or we can specify many different conditions...
bassist = "Bill"
if bassist == "Paul McCartney":
    print("Paul played bass in the Beatles!")
elif bassist == "Bill Wyman":
    print("Bill Wyman played for the Rolling Stones!")
else:
    print("I don't know what band this guy played for...")
For loops

The greatest thing about lists is that they are iterable, that is, you can loop through them. What do we do if we want to apply some line of code to each element in a list? Try a for loop! A for loop can be paraphrased as: "for each element named x in an iterable (e.g. a list): do some code (e.g. print the value of x)".
for b in beatles:
    print(b + " was one of the Beatles")
Let's break the code down to its parts:
* b: an arbitrary name that we give to the variable holding each value in the loop (it could have been any name; b is just very convenient in this case!)
* beatles: the list we're iterating through
* the colon: as in the if statements, don't forget it!
* the indent: also, don't forget to indent the code! It's the only thing telling Python that line 2 is part of the for loop!
* line 2: the code that we want to execute for each item in the iterable

Now, let's join if statements and for loops to do something nice...
beatles = ["John", "Paul", "George", "Ringo"]
for b in beatles:
    if b == "Paul":
        instrument = "bass"
    elif b == "John":
        instrument = "rhythm guitar"
    elif b == "George":
        instrument = "lead guitar"
    elif b == "Ringo":
        instrument = "drums"
    print(b + " played " + instrument + " with the Beatles")
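As an aside (not part of the original notebook): the chain of elifs above can also be written as a dictionary lookup, a common Python idiom when you are simply mapping values to values:

```python
beatles = ["John", "Paul", "George", "Ringo"]

# map each Beatle to his instrument instead of chaining elifs
instruments = {"Paul": "bass", "John": "rhythm guitar",
               "George": "lead guitar", "Ringo": "drums"}

for b in beatles:
    print(b + " played " + instruments[b] + " with the Beatles")
```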
Input and Output

One of the most frequent tasks for programmers is reading data from files and writing some of a program's output to a file. In Python (as in many languages), we first need to open a file handler with the appropriate mode in order to process the file. Files can be opened in:
* read mode ("r")
* write mode ("w")
* append mode ("a")

Let's try to read the content of one of the txt files of our Sunoikisis directory. First, we open the file handler in read mode:
# see? we assign the file-handler to a variable, or we wouldn't be able
# to do anything with it!
f = open("NOTES.md", "r")
Note that "r" is optional: read is the default mode! Now there are a bunch of things we can do:
* read the full content into one variable: content = f.read()
* read the lines into a list of lines: lines = f.readlines()
* or, easiest of all, simply read the content one line at a time with a for loop; the f object is iterable, so this is as easy as:
for l in f:
    print(l)
Once you're done, don't forget to close the handle:
f.close()

# all together
f = open("NOTES.md")
for l in f:
    print(l)
f.close()
Now, there's a shortcut statement, which you'll often see and is very convenient, because it takes care of opening, closing and cleaning up the mess, in case there's some error:
with open("NOTES.md") as f:
    # mind the indent!
    for l in f:
        # double indent, of course!
        print(l)
Now, how about writing to a file? Let's try to write a simple message on a file; first, we open the handler in write mode
out = open("test.txt", "w")

# the file is now open; let's write something in it
out.write("This is a test!\nThis is a second line (separated with a new-line feed)")
The file has been created! Let's check this out
# don't worry if you don't understand this code!
# We're simply listing the content of the current directory...
import os
os.listdir()
But before you can do anything with it (e.g. open it with your favorite text editor), you have to close the file handler!
out.close()
Let's look at its content
with open("test.txt") as f:
    print(f.read())
We can use a with statement for writing too, which is very handy. But let's have a look at what happens here, so we understand a bit better why "write" mode must be used carefully!
with open("test.txt", "w") as out:
    out.write("Oooops! new content")
Let's have a look at the content of "test.txt" now
with open("test.txt") as f:
    print(f.read())
See? After we opened the file in "write" mode for the second time, all content of the file was erased and replaced with the new content that we wrote! So keep in mind, when you open a file in "w" mode:
* if it doesn't exist, a new file with that name is created
* if it does exist, it is completely overwritten and all previous content is lost

If you want to write content to an existing file without losing its previous content, you have to open the file in "a" (append) mode:
with open("test.txt", "a") as out:
    out.write('''\nAnd this is some additional content.
The new content is appended at the bottom of the existing file''')

with open("test.txt") as f:
    print(f.read())
Functions

Above, we opened a file several times to inspect its content. Each time, we had to type the same code over and over. This is the typical case where you'd like to save some typing (and write code that is much easier to maintain!) by defining a function.

A function is a block of reusable code that can be invoked to perform a definite task. Most often (but not necessarily), it accepts one or more arguments and returns a value. We have already seen one of Python's built-in functions: print("some str"). But it's actually very easy to define your own. Let's define the function to print out a file's content, as discussed before. Note that this function takes one argument (the file name) and prints out some text, but doesn't return any value.
def printFileContent(file_name):
    # the function takes one argument: file_name
    with open(file_name) as f:
        print(f.read())
As usual, mind the indent! file_name (line 1) is the placeholder that stands, inside the function, for whatever argument we pass in when we reuse the code in real life. Now, if we want to use our function, we simply call it with the name of the file that we want to print out:
printFileContent("README.md")
Now, let's see an example of a function that returns a value to the user. Such functions typically take some arguments, process them, and yield back the result of this processing. Here's the simplest example possible: a function that takes two numbers as arguments, sums them, and returns the result.
def sumTwoNumbers(first_int, second_int):
    s = first_int + second_int
    return s

# could be even shorter:
def sumTwoNumbers(first_int, second_int):
    return first_int + second_int

sumTwoNumbers(5, 6)
Most often, you want to assign the returned result to a variable, so that you can keep working with it...
s = sumTwoNumbers(5, 6)
s * 2
Errors and exceptions

Things can go wrong, especially when you're a beginner. But don't panic! Errors and exceptions are actually a good thing: Python gives you detailed reports about what is wrong, so read them carefully and try to figure out what is not right. As you get better, you'll learn that you can even do something useful with exceptions: you'll learn how to handle them, and to anticipate some of the most common problems that dirty data can throw at you...

Now, what happens if you forget the all-important syntactic constraint of the code indent?
if 1 > 0:
print("Well, we know that 1 is bigger than 0!")
Pretty clear, isn't it? What you get is an error: a construct that is not grammatical in Python's syntax. Note that you're also told where (at what line, and at what point of the code) the error occurs. That is not always perfect (there are cases where the problem actually occurs before the point Python reports), but in this case it's pretty accurate. What if you forget to define a variable (or you misspell its name)?
var = "bla bla"
if var1:
    print("If you see me, then I was defined...")
You get an exception! The syntax of your code is right, but the execution met a problem that caused the program to stop. Now, in your program you can handle selected exceptions: this means you can write your code in such a way that the program is still executed even if a certain exception is raised. Let's see what happens if we use our function to try to print the content of a file that doesn't exist:
printFileContent("file_that_is_not_there.txt")
We get a FileNotFoundError! Now, let's re-write the function so that this event (somebody uses the function with a wrong file name) is taken care of...
def printFileContent(file_name):
    # the function takes one argument: file_name
    try:
        with open(file_name) as f:
            print(f.read())
    except FileNotFoundError:
        print("The file does not exist.\nNevertheless, I do like you, and I will print something to you anyway...")

printFileContent("file_that_doesnt_exist.txt")
We first define a function to prepare the data in the format required by keras (with the theano backend). The function also reduces the size of the images from 100x100 to 32x32.
def prep_datas(xset, xlabels):
    X = list(xset)
    for i in range(len(X)):
        # reduce the image from 100x100 to 32x32; also flattens the color levels
        X[i] = resize(X[i], (32, 32, 1))
    # reshape the list into the form required by keras (theano), i.e. (1, 32, 32)
    X = np.reshape(X, (len(X), 1, 32, 32))
    X = np.array(X)  # turn it into an array
    # generate one-hot vectors, here with two elements as required by keras (number of classes)
    Y = np.eye(2, dtype='uint8')[xlabels]
    return X, Y
Archiv_Session_Spring_2017/Exercises/05_aps_capcha.ipynb
peterwittek/qml-rg
gpl-3.0
We then load the training set and the test set and prepare them with the function prep_datas.
training_set, training_labels = im.load_images(path_train)
test_set, test_labels = im.load_images(path_test)
X_train, Y_train = prep_datas(training_set, training_labels)
X_test, Y_test = prep_datas(test_set, test_labels)
Image before/after compression
i = 11
plt.subplot(1, 2, 1)
plt.imshow(training_set[i], cmap='gray')
plt.subplot(1, 2, 2)
plt.imshow(X_train[i][0], cmap='gray')
LeNet neural network
# import the necessary packages
from keras.models import Sequential
from keras.layers.convolutional import Convolution2D
from keras.layers.convolutional import MaxPooling2D
from keras.layers.core import Activation
from keras.layers.core import Flatten
from keras.layers.core import Dense
from keras.optimizers import SGD

# this code comes from http://www.pyimagesearch.com/2016/08/01/lenet-convolutional-neural-network-in-python/
class LeNet:
    @staticmethod
    def build(width, height, depth, classes, weightsPath=None):
        # initialize the model
        model = Sequential()

        # first set of CONV => RELU => POOL
        model.add(Convolution2D(20, 5, 5, border_mode="same", input_shape=(depth, height, width)))
        model.add(Activation("relu"))
        model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))

        # second set of CONV => RELU => POOL
        model.add(Convolution2D(50, 5, 5, border_mode="same"))
        model.add(Activation("relu"))
        model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))

        # set of FC => RELU layers
        model.add(Flatten())
        model.add(Dense(500))
        model.add(Activation("relu"))

        # softmax classifier
        model.add(Dense(classes))
        model.add(Activation("softmax"))

        # return the constructed network architecture
        return model
We build the neural network and fit it on the training set
model = LeNet.build(width=32, height=32, depth=1, classes=2)
opt = SGD(lr=0.01)  # stochastic gradient descent with learning rate 0.01
model.compile(loss="categorical_crossentropy", optimizer=opt, metrics=["accuracy"])
model.fit(X_train, Y_train, batch_size=10, nb_epoch=300, verbose=1)

y_pred = model.predict_classes(X_test)
print(y_pred)
print(test_labels)
We now compare with the real world images (with the deshear method)
real_world_set = []
for i in np.arange(1, 73):
    filename = path + 'images/real_world/' + str(i) + '.png'
    real_world_set.append(im.deshear(filename))

fake_label = np.ones(len(real_world_set), dtype='int32')
X_real, Y_real = prep_datas(real_world_set, fake_label)
y_pred = model.predict_classes(X_real)
with Peter's labels
f = open(path + 'images/real_world/labels.txt', "r")
lines = f.readlines()
result = []
for x in lines:
    result.append((x.split(' ')[1]).replace('\n', ''))
f.close()

result = np.array([int(x) for x in result])
result[result > 1] = 1

plt.plot(y_pred, 'o')
plt.plot(2 * result, 'o')
plt.ylim(-0.5, 2.5);
Preparation

Import the data and display basic summary statistics.
# import the csv
df = pd.read_csv('odakyu-mansion.csv')

# display basic summary statistics
print(df.describe())
応用統計/HW1/HW1.ipynb
myuuuuun/various
mit
Decompose the orientation of each house into east/west/south/north dummy variables (0 or 1; a south-east-facing house gets a 1 for both south and east):
# sample size
data_len = df.shape[0]

# turn the orientation of the house into dummies
df['d_N'] = np.zeros(data_len, dtype=float)
df['d_E'] = np.zeros(data_len, dtype=float)
df['d_W'] = np.zeros(data_len, dtype=float)
df['d_S'] = np.zeros(data_len, dtype=float)
for i, row in df.iterrows():
    for direction in ["N", "W", "S", "E"]:
        if direction in str(row.muki):
            df.loc[i, 'd_{0}'.format(direction)] = 1

# display the first 10 rows
print(df.head(10))
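The row-by-row loop above can also be written with pandas string methods, which avoids iterrows. A sketch on toy data (the real muki column is assumed to contain letter codes such as "SE", as the membership test in the loop suggests):

```python
import pandas as pd

# toy stand-in for the real dataframe; muki holds orientation codes
df = pd.DataFrame({'muki': ['S', 'SE', 'N', 'E']})

# one dummy column per direction, set to 1.0 when the code contains that letter
for d in ['N', 'E', 'W', 'S']:
    df['d_' + d] = df['muki'].astype(str).str.contains(d).astype(float)

print(df)
```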
Replace missing values with the column means.
df = df.fillna(df.mean())
Ordinary least squares

We remove, one by one, the explanatory variables that contribute little to explaining the dependent variable; concretely, we drop variables with p > 0.05. Along the way, we also deal with outliers.

OLS, round 1

Running OLS with all 13 explanatory variables gives:
# add a constant term
X = sm.add_constant(df[['time', 'bus', 'walk', 'area', 'bal', 'kosuu', 'floor', 'tf', 'd_N', 'd_E', 'd_S', 'd_W', 'year']])

# plain ordinary least squares
model = sm.OLS(df.price, X)
results = model.fit()

# display the results
print(results.summary())
Looking at the p-values, kosuu and floor appear to be almost unrelated to the price. kosuu has one outlier (kosuu = 2080), so let's remove it.

OLS, round 2

Dropping the outlier,
print(df.loc[161])
df = df.drop(161)
and running OLS again gives:
X = sm.add_constant(df[['time', 'bus', 'walk', 'area', 'bal', 'kosuu', 'floor', 'tf', 'd_N', 'd_E', 'd_S', 'd_W', 'year']])
model = sm.OLS(df.price, X)
results = model.fit()
print(results.summary())
The p-values of kosuu and floor are still large, so we drop them from the explanatory variables.

OLS, round 3
X = sm.add_constant(df[['time', 'bus', 'walk', 'area', 'bal', 'tf', 'd_N', 'd_E', 'd_S', 'd_W', 'year']])
model = sm.OLS(df.price, X)
results = model.fit()
print(results.summary())
Next, we also drop bal and the orientation dummies other than the south-facing one.

OLS, round 4
X = sm.add_constant(df[['time', 'bus', 'walk', 'area', 'tf', 'year', 'd_S']])
model = sm.OLS(df.price, X)
results = model.fit()
print(results.summary())
We also drop the south-facing dummy d_S and the building age year, whose p-values are large.

OLS, round 5
X = sm.add_constant(df[['time', 'bus', 'walk', 'area', 'tf']])
model = sm.OLS(df.price, X)
results = model.fit()
print(results.summary())
We remove tf, whose p-value is large.

OLS, round 6
X = sm.add_constant(df[['time', 'bus', 'walk', 'area']])
model = sm.OLS(df.price, X)
results = model.fit()
print(results.summary())
With adjusted R^2 = 0.783 and an F-statistic p-value of 3.96e-59, the four variables "riding time from Shinjuku station", "bus time", "walking time" and "room area" appear to explain housing prices sufficiently well. Compared with rounds 1-6, the AIC and BIC are almost unchanged or improved. What remains is to examine the residuals and check whether the assumptions on the error term are satisfied.

Residual analysis

The assumptions on the residuals were:
* the error term has mean 0
* the error term has constant variance
* the error terms are mutually independent
* the error term is (at least approximately) normally distributed
* the correlation between the error term and each explanatory variable is 0

(Note: I don't know how to check all of these rigorously, so I only verify the ones I can.)

First, we plot the residuals (vertical axis) against the predicted prices (horizontal axis).
# keep only the variables used in the regression
new_df = df.loc[:, ['price', 'time', 'bus', 'walk', 'area']]

# matrix of explanatory variables
exp_matrix = new_df.loc[:, ['time', 'bus', 'walk', 'area']]
# vector of regression coefficients
coefs = results.params
# vector of predicted prices
predicted = exp_matrix.dot(coefs[1:]) + coefs[0]
# vector of residuals
residuals = new_df.price - predicted

# plot the residuals
fig, ax = plt.subplots(figsize=(12, 8))
plt.plot(predicted, residuals, 'o', color='b', linewidth=1, label="residuals distribution")
plt.xlabel("predicted values")
plt.ylabel("residuals")
plt.show()

# mean of the residuals
print("residuals mean:", residuals.mean())
The mean is almost 0, and the plot shows the points concentrated around 0: assumption 1 is satisfied. However, a few outliers are visible on the right. We remove the point in the upper right and run the regression once more.

OLS, round 7
print(new_df.loc[12])
new_df = new_df.drop(12)

X = sm.add_constant(new_df[['time', 'bus', 'walk', 'area']])
model = sm.OLS(new_df.price, X)
results = model.fit()
print(results.summary())

# matrix of explanatory variables
exp_matrix = new_df.loc[:, ['time', 'bus', 'walk', 'area']]
# vector of regression coefficients
coefs = results.params
# vector of predicted prices
predicted = exp_matrix.dot(coefs[1:]) + coefs[0]
# vector of residuals
residuals = new_df.price - predicted

# plot the residuals
fig, ax = plt.subplots(figsize=(12, 8))
plt.plot(predicted, residuals, 'o', color='b', linewidth=1, label="residuals distribution")
plt.xlabel("predicted values")
plt.ylabel("residuals")
plt.show()

# mean of the residuals
print("residuals mean:", residuals.mean())
Compared with round 6, the scatter of the residuals is now more even. Next, we plot the residuals (vertical axis) against the observed values of each explanatory variable (horizontal axis).
# plot the residuals against each explanatory variable
fig = plt.figure(figsize=(18, 10))

ax1 = plt.subplot(2, 2, 1)
plt.plot(exp_matrix['time'], residuals, 'o', color='b', linewidth=1, label="residuals - time")
plt.xlabel("time")
plt.ylabel("residuals")
plt.legend()

ax2 = plt.subplot(2, 2, 2, sharey=ax1)
plt.plot(exp_matrix['bus'], residuals, 'o', color='b', linewidth=1, label="residuals - bus")
plt.xlabel("bus")
plt.ylabel("residuals")
plt.legend()

ax3 = plt.subplot(2, 2, 3, sharey=ax1)
plt.plot(exp_matrix['walk'], residuals, 'o', color='b', linewidth=1, label="residuals - walk")
plt.xlabel("walk")
plt.ylabel("residuals")
plt.legend()

ax4 = plt.subplot(2, 2, 4, sharey=ax1)
plt.plot(exp_matrix['area'], residuals, 'o', color='b', linewidth=1, label="residuals - area")
plt.xlabel("area")
plt.ylabel("residuals")
plt.legend()

plt.show()
Now let's investigate the mean and standard deviation of the binomial distribution further. The mean of a binomial distribution is simply:
$$\mu=np$$
This intuitively makes sense: the average number of successes should be the total number of trials multiplied by your average success rate. Similarly, the standard deviation of a binomial is:
$$\sigma=\sqrt{npq}$$
Let's try another example to see the full PMF (Probability Mass Function) plot. Imagine you flip a fair coin. Your probability of getting a heads is p=0.5 (success in this example). So what does your probability mass function look like for 10 coin flips?
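The two formulas above can be checked numerically against scipy (a quick sketch; binom is the same scipy.stats object used in the next cell):

```python
import numpy as np
from scipy.stats import binom

n, p = 10, 0.5
q = 1 - p

# scipy's exact mean and variance of the binomial
mean, var = binom.stats(n, p)

print(float(mean), float(np.sqrt(var)))
assert np.isclose(float(mean), n * p)                       # mu = n*p
assert np.isclose(float(np.sqrt(var)), np.sqrt(n * p * q))  # sigma = sqrt(npq)
```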
import numpy as np
from scipy.stats import binom  # binom provides the binomial pmf

# Set up a new example: n = 10 coin flips and p = 0.5 for a fair coin.
n = 10
p = 0.5

# Set up n successes; remember indexing starts at 0, so use n+1
x = range(n + 1)

# Now create the probability mass function
Y = binom.pmf(x, n, p)

# Show Y
Y

# Next we'll visualize the pmf by plotting it.
notebooks/machineLearning_notebooks/01_Naive_Bayes/Distributions.ipynb
jamesfolberth/NGC_STEM_camp_AWS
bsd-3-clause
Finally we will plot the binomial distribution.
import matplotlib.pyplot as plt
%matplotlib inline

# For simple plots, matplotlib is fine; seaborn is unnecessary.

# Now simply use plot
plt.plot(x, Y, 'o')

# Title (use y=1.08 to raise the long title a little more above the plot)
plt.title('Binomial Distribution PMF: 10 Coin Flips, Odds of Success for Heads is p=0.5', y=1.08)

# Axis titles
plt.xlabel('Number of Heads')
plt.ylabel('Probability')
Looks awfully bell shaped...

Going further

Suppose you play Blackjack and have a 50% chance of winning. You start with a 1 dollar bet, and if you lose, you double the amount you bet on the next play. So if you play three rounds, and Lose, Lose, Win, then you lost 1 dollar on the first round, 2 dollars on the second round, but gained 4 dollars on the third round, for a net profit of 1 dollar. What is the expected value of your winnings (assuming you play as many rounds as it takes until you win)? In fact, are you guaranteed to make money? Why might this be a bad idea in practice? Note: this is a famous strategy known as the Gambler's Ruin.

The Normal Distribution

Next we will talk about the normal distribution. This is the most important continuous distribution. It is also called the Gaussian distribution, or the bell curve. While the binomial distribution is often considered the most basic discrete distribution, the normal is the most fundamental of all continuous random variables.
from IPython.display import Image Image(url='https://static.squarespace.com/static/549dcda5e4b0a47d0ae1db1e/54a06d6ee4b0d158ed95f696/54a06d70e4b0d158ed960413/1412514819046/1000w/Gauss_banknote.png')
Now we define the normal pdf. The first equation below is the pdf for the normal distribution with mean $\mu$ and variance $\sigma^2$. The second equation is the standard normal $\Phi$ with mean $0$ and variance $1$. We can always transform our random variable $X \sim {\mathcal {N}}(\mu ,\,\sigma ^{2})$ to the standard normal $Z \sim {\mathcal {N}}(0 , \, 1)$ by using the change of variables formula in the third equation.
$$ f(x\;|\;\mu ,\sigma ^{2})={\frac {1}{\sqrt {2\pi \sigma ^{2}}}}\;e^{-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}} $$
$$ f(x,\mu,\sigma) = \frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{z^2}{2}} $$
$$ z=\frac{(X-\mu)}{\sigma} $$
The plot of the pdf may be familiar to some of you by now.
# Imports
import matplotlib as mpl
%matplotlib inline

# Import the stats library
from scipy import stats

# Set the mean
mn = 0

# Set the standard deviation
std_dev = 1

# Create a range
X = np.arange(-4, 4, 0.001)

# Create the normal distribution for the range
Y = stats.norm.pdf(X, mn, std_dev)

# plt.plot(X, Y)

from IPython.display import Image
Image(url='http://upload.wikimedia.org/wikipedia/commons/thumb/2/25/The_Normal_Distribution.svg/725px-The_Normal_Distribution.svg.png')
The bell curve, centered at the mean, also shows the variance with how wide the bell is. The x-axis gives the realized values of the random variable, and the y values of the curve give the probability these values are realized by the random variable/experiment. This image does a great job of giving a good interpretation of what percent of the outcomes lie within $n$ standard deviations of the mean. Using python, we can draw samples from a normal distribution and plot these samples using a histogram. The idea is that if we take enough samples, this histogram should look like the bell curve. We first start with 30, then up it to 1000. The histogram gives you a good idea of what the pdf of these samples will look like, but just as a visual aid we also plot a pdf estimator in blue. This is called a kernel density estimator, but it is beyond the scope of this tutorial. We plot the normal distribution these samples came from in green.
# Set the mean and the standard deviation
mu, sigma = 0, 1

# Now grab 30 random numbers from the normal distribution
norm_set = np.random.normal(mu, sigma, 30)

# Now let's plot it using seaborn
import seaborn as sns
import sklearn as sk
from scipy.stats import gaussian_kde

results, edges = np.histogram(norm_set, normed=True)
binWidth = edges[1] - edges[0]
plt.bar(edges[:-1], results * binWidth, binWidth)

density = gaussian_kde(norm_set)
density.covariance_factor = lambda: .4  # this is the bandwidth in the kernel density estimator
density._compute_covariance()
plt.plot(X, density(X))
plt.plot(X, Y)
With enough samples, this should start to look normal.
norm2 = np.random.normal(mu, sigma, 1000)

results, edges = np.histogram(norm2, normed=True)
binWidth = edges[1] - edges[0]
plt.bar(edges[:-1], results * binWidth, binWidth)

density = gaussian_kde(norm2)
density.covariance_factor = lambda: .4
density._compute_covariance()
plt.plot(X, density(X))
plt.plot(X, Y)
Central Limit Theorem The Central Limit Theorem is one of the most important theorems in statistical theory. It states that when independent random variables are added, their sum tends toward a normal distribution even if the original variables themselves are not normally distributed. This may seem obscure so we will explain it with the previous example. We saw that as we took more samples from the normal distribution, the total distribution looks more and more normal. Here we will take more and more samples from a normal, and see how the mean of the samples behaves.
n10 = np.random.normal(mu, sigma, 10)
n100 = np.random.normal(mu, sigma, 100)
n1000 = np.random.normal(mu, sigma, 1000)
n10000 = np.random.normal(mu, sigma, 10000)

print(n10.mean(), n100.mean(), n1000.mean(), n10000.mean())
We see that as we add more samples, the sample mean approaches 0, which is the true mean. This is the central limit theorem at work: the sum of many random variables tends towards a Gaussian distribution (under some conditions). The next image shows how summing more and more dice rolls makes the distribution of the sum look more and more Gaussian.
Image(url='https://upload.wikimedia.org/wikipedia/commons/8/8c/Dice_sum_central_limit_theorem.svg')
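The dice picture can be reproduced numerically: each single die roll is far from normal, but the distribution of a sum of many rolls is close to Gaussian. A sketch (the sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# sum 30 independent fair-die rolls, repeated 10,000 times
sums = rng.integers(1, 7, size=(10000, 30)).sum(axis=1)

# the CLT predicts mean n*mu = 30*3.5 and variance n*sigma^2 = 30*35/12 for the sum
print(sums.mean())  # should be close to 105
print(sums.var())   # should be close to 87.5
```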
Student's t distribution

For the normal distribution, the sample size is usually assumed to be large ($N>30$). The t distribution allows for the use of small samples, but does so by sacrificing certainty with a margin-of-error trade-off (i.e. a larger variance). The t distribution takes the sample size into account through its n-1 degrees of freedom, which means there is a different t distribution for every sample size. If we plot the t distribution against a normal distribution, you'll notice the tails get fatter as the peak gets 'squished' down. To be precise, the t distribution models the sample mean of $N$ observations taken from a normal distribution. It's important to note that as $N$ gets larger, the t distribution converges to a normal distribution.

To further explain degrees of freedom and how they relate to the t distribution: you can think of degrees of freedom as an adjustment to the sample size, such as (n-1). This is connected to the idea that we are estimating something about a larger population; in practice it gives a slightly larger margin of error in the estimate.

Let's define a new variable called t, where:
$$t=\frac{\overline{X}-\mu}{s}\sqrt{N-1}=\frac{\overline{X}-\mu}{s/\sqrt{N}}$$
which is analogous to the z statistic given by
$$z=\frac{\overline{X}-\mu}{\sigma/\sqrt{N}}$$
The sampling distribution for t can be obtained:
$$ f(t) = \frac {\varGamma(\frac{v+1}{2})}{\sqrt{v\pi}\,\varGamma(\frac{v}{2})} \left(1+\frac{t^2}{v}\right)^{-\frac{v+1}{2}}$$
where the gamma function is
$$\varGamma(n)=(n-1)!$$
and v is the number of degrees of freedom, typically equal to N-1. Please don't worry about these formulas: literally no one memorizes this distribution. Just know the binomial and the normal, and the idea of what a t distribution is for (small sample sizes). The t distribution is plotted in blue, and the normal distribution is in green for comparison.
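The t statistic defined above can be computed by hand and checked against scipy's one-sample t-test (a sketch with made-up numbers):

```python
import numpy as np
from scipy import stats

sample = np.array([2.1, 1.9, 2.4, 2.0, 2.3, 1.8])  # made-up small sample
mu0 = 2.0                                           # hypothesized mean

# t = (xbar - mu) / (s / sqrt(N)), with s the sample std (ddof=1)
xbar = sample.mean()
s = sample.std(ddof=1)
t_by_hand = (xbar - mu0) / (s / np.sqrt(len(sample)))

t_scipy, p_value = stats.ttest_1samp(sample, mu0)
assert np.isclose(t_by_hand, t_scipy)
print(t_by_hand)
```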
#Import for plots import matplotlib.pyplot as plt %matplotlib inline #Import the stats library from scipy.stats import t, norm #import numpy import numpy as np # Create x range x = np.linspace(-5,5,100) # Create the t distribution with scipy rv = t(3) # Plot the t PDF and the normal PDF versus the x range plt.plot(x, rv.pdf(x)) plt.plot(x, norm.pdf(x))
notebooks/machineLearning_notebooks/01_Naive_Bayes/Distributions.ipynb
jamesfolberth/NGC_STEM_camp_AWS
bsd-3-clause
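To see the convergence numerically, here is a small sketch (using `scipy.stats`) comparing the tail probability $P(T > 2)$ of the t distribution to that of the normal as the degrees of freedom grow:

```python
from scipy.stats import t, norm

dfs = [3, 10, 30, 100]
# P(T > 2) for increasing degrees of freedom: the heavy t tails
# shrink toward the normal tail probability as the sample size grows.
tail_probs = [t(df).sf(2.0) for df in dfs]
normal_tail = norm.sf(2.0)
print(tail_probs, normal_tail)
```

The tail probabilities decrease monotonically toward the normal value, which is exactly the "fat tails flattening out" seen in the plot above.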
Notice that the t distribution in blue has fatter tails than the normal distribution. This means there is more probability of realizing an observation in that region. As a consequence, there is less area under the peak/it is less spikey. This is a reflection of the fact that we have less certainty in where the observations will land because we have a smaller sample size of evidence to support this estimation of the data's distribution. Multivariate Normal A multivariate normal with unequal variances in the x and y direction and principal axes also rotated from the origin. Notice how each shade or horizontal slice looks like an ellipse. What is multivariate data? It just means more than one dimension. * An example of a one-dimensional random variable is the height of a person randomly chosen from this class * An example of a multi-variate random variable is the (height, shoe-size) of a person randomly chosen from this class For an example like (height, shoe-size), where x=height and y=shoe-size, we expect the two variables to be correlated. Other examples, like (height, first-letter-of-your-name) are not correlated. If the variables are correlated, statistical tests should know about it!
import scipy.stats x, y = np.mgrid[-1:1:.01, -1:1:.01] pos = np.empty(x.shape + (2,)) pos[:, :, 0] = x; pos[:, :, 1] = y rv = scipy.stats.multivariate_normal([0.5, -0.2], [[2.0, 0.3], [0.3, 0.5]]) plt.contourf(x, y, rv.pdf(pos))
notebooks/machineLearning_notebooks/01_Naive_Bayes/Distributions.ipynb
jamesfolberth/NGC_STEM_camp_AWS
bsd-3-clause
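To make the correlation idea concrete, a quick sketch: sample from the same covariance matrix used in the contour plot above (off-diagonal 0.3) and check the empirical correlation between the two coordinates. The theoretical value is $0.3/\sqrt{2.0 \cdot 0.5} = 0.3$.

```python
import numpy as np

rng = np.random.default_rng(0)
mean = [0.5, -0.2]
cov = [[2.0, 0.3], [0.3, 0.5]]   # same parameters as the contour plot
samples = rng.multivariate_normal(mean, cov, size=100_000)
# empirical correlation between the two coordinates
r = np.corrcoef(samples[:, 0], samples[:, 1])[0, 1]
print(r)   # close to the theoretical 0.3
```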
Getting the data We're not going into details here
# Import the MNIST dataset from tensorflow.examples.tutorials.mnist import input_data import numpy as np mnist = input_data.read_data_sets("/tmp/MNIST/", one_hot=True) x_train = np.reshape(mnist.train.images, (-1, 28, 28, 1)) y_train = mnist.train.labels x_test = np.reshape(mnist.test.images, (-1, 28, 28, 1)) y_test = mnist.test.labels
code_samples/estimators-for-free/.ipynb_checkpoints/estimators_for_free-checkpoint.ipynb
mari-linhares/tensorflow-workshop
apache-2.0
Defining the input function If we look at the image above we can see that there are two main parts in the diagram: an input function interacting with data files, and the Estimator interacting with the input function and checkpoints. This means that the Estimator doesn't know about data files; it only knows about input functions. So if we want to interact with a dataset we need to create an input function for it; in this example we create one input function for the training set and one for the test set. You can learn more about input functions here
BATCH_SIZE = 128 x_train_dict = {'x': x_train } train_input_fn = numpy_io.numpy_input_fn( x_train_dict, y_train, batch_size=BATCH_SIZE, shuffle=True, num_epochs=None, queue_capacity=1000, num_threads=4) x_test_dict = {'x': x_test } test_input_fn = numpy_io.numpy_input_fn( x_test_dict, y_test, batch_size=BATCH_SIZE, shuffle=False, num_epochs=1)
code_samples/estimators-for-free/.ipynb_checkpoints/estimators_for_free-checkpoint.ipynb
mari-linhares/tensorflow-workshop
apache-2.0
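Conceptually, an input function is just something that hands the estimator shuffled mini-batches of `(features, labels)`. A framework-free numpy sketch of that contract (the names here are illustrative, not TensorFlow's actual API):

```python
import numpy as np

def make_input_fn(x, y, batch_size, shuffle=True, seed=0):
    """Return a callable that yields ({'x': features}, labels) mini-batches."""
    def input_fn():
        idx = np.arange(len(x))
        if shuffle:
            np.random.default_rng(seed).shuffle(idx)
        for start in range(0, len(idx), batch_size):
            sl = idx[start:start + batch_size]
            yield {'x': x[sl]}, y[sl]
    return input_fn

x = np.arange(10).reshape(10, 1)
y = np.arange(10)
# 10 examples with batch_size 4 -> batches of size 4, 4, and 2
batches = list(make_input_fn(x, y, batch_size=4, shuffle=False)())
```

This is why the estimator never touches the data files directly: it only calls the input function and consumes whatever batches come back.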
Creating an experiment After an experiment is created (by passing an Estimator and inputs for training and evaluation), an Experiment instance knows how to invoke training and eval loops in a sensible fashion for distributed training. More about it here
# parameters LEARNING_RATE = 0.01 STEPS = 1000 # create experiment def generate_experiment_fn(): def _experiment_fn(run_config, hparams): del hparams # unused, required by signature. # create estimator model_params = {"learning_rate": LEARNING_RATE} estimator = tf.estimator.Estimator(model_fn=m.get_model(), params=model_params, config=run_config) train_input = train_input_fn test_input = test_input_fn return tf.contrib.learn.Experiment( estimator, train_input_fn=train_input, eval_input_fn=test_input, train_steps=STEPS ) return _experiment_fn
code_samples/estimators-for-free/.ipynb_checkpoints/estimators_for_free-checkpoint.ipynb
mari-linhares/tensorflow-workshop
apache-2.0
Run the experiment
OUTPUT_DIR = 'output_dir/model1' learn_runner.run(generate_experiment_fn(), run_config=tf.contrib.learn.RunConfig(model_dir=OUTPUT_DIR))
code_samples/estimators-for-free/.ipynb_checkpoints/estimators_for_free-checkpoint.ipynb
mari-linhares/tensorflow-workshop
apache-2.0
Running a second time Okay, the model is definitely not good... But check the OUTPUT_DIR path: you'll see that an output_dir folder was created, containing a lot of files generated automatically by TensorFlow! Most of these files are checkpoints, which means that if we run the experiment again with the same model_dir it will just load the checkpoint and continue from there instead of starting all over again! This means that: If we have a problem while training we can restore from where we stopped instead of starting over If we didn't train enough we can simply continue training If we have a big file we can break it into small files, train for a while on each one, and the model will continue from where it stopped each time :) This is all true as long as you use the same model_dir! So, let's run the experiment again for 1000 more steps to see if we can improve the accuracy. Notice that the first step in this run will actually be step 1001, so we need to change the number of steps to 2000 (otherwise the experiment will find the checkpoint and think it has already finished training)
STEPS = STEPS + 1000 learn_runner.run(generate_experiment_fn(), run_config=tf.contrib.learn.RunConfig(model_dir=OUTPUT_DIR))
code_samples/estimators-for-free/.ipynb_checkpoints/estimators_for_free-checkpoint.ipynb
mari-linhares/tensorflow-workshop
apache-2.0
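The resume-from-checkpoint behavior can be sketched without TensorFlow at all: the only ingredients are a step counter persisted under the model directory and a target step count. (The file name and format below are invented for illustration; they are not TF's checkpoint format.)

```python
import json
import os
import tempfile

def train(model_dir, total_steps):
    """Run 'training' up to total_steps, resuming from any saved step."""
    os.makedirs(model_dir, exist_ok=True)
    ckpt = os.path.join(model_dir, "step.json")
    step = 0
    if os.path.exists(ckpt):
        with open(ckpt) as f:
            step = json.load(f)["step"]   # resume from the checkpoint
    while step < total_steps:
        step += 1                         # one "training step"
    with open(ckpt, "w") as f:
        json.dump({"step": step}, f)      # save the checkpoint
    return step

d = tempfile.mkdtemp()
first = train(d, 1000)    # trains steps 1..1000
second = train(d, 2000)   # resumes at 1001, trains to 2000
```

Calling `train(d, 1000)` a third time does nothing new, just as the experiment above finds the checkpoint at step 2000 and considers training finished.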
Tensorboard Another thing we get for free is TensorBoard. If you run: tensorboard --logdir=OUTPUT_DIR you'll see the graph and some scalars; if you use an embedding layer you'll get an embedding visualization in TensorBoard as well! So, we can make small changes and we'll have an easy (and totally free) way to compare the models. Let's make these changes: 1. change the learning rate to 0.05 2. change the OUTPUT_DIR to some path inside output_dir/ Change 2 must be inside output_dir/ because then we can run: tensorboard --logdir=output_dir/ and we'll get both models visualized at the same time in TensorBoard. You'll notice that the model will start from step 1, because there's no existing checkpoint in this path.
LEARNING_RATE = 0.05 OUTPUT_DIR = 'output_dir/model2' learn_runner.run(generate_experiment_fn(), run_config=tf.contrib.learn.RunConfig(model_dir=OUTPUT_DIR))
code_samples/estimators-for-free/.ipynb_checkpoints/estimators_for_free-checkpoint.ipynb
mari-linhares/tensorflow-workshop
apache-2.0
Load the volume and look at the image (visualization requires window-leveling).
spherical_fiducials_image = sitk.ReadImage(fdata("spherical_fiducials.mha")) sitk.Show(spherical_fiducials_image, "spheres")
33_Segmentation_Thresholding_Edge_Detection.ipynb
thewtex/SimpleITK-Notebooks
apache-2.0
After looking at the image you should have identified two spheres. Now select a Region Of Interest (ROI) around the sphere which you want to analyze.
rois = {"ROI1":[range(280,320), range(65,90), range(8, 30)], "ROI2":[range(200,240), range(65,100), range(15, 40)]} mask_value = 255 def select_roi_dropdown_callback(roi_name, roi_dict): global mask, mask_ranges mask_ranges = roi_dict.get(roi_name) if mask_ranges: mask = sitk.Image(spherical_fiducials_image.GetSize(), sitk.sitkUInt8) mask.CopyInformation(spherical_fiducials_image) for x in mask_ranges[0]: for y in mask_ranges[1]: for z in mask_ranges[2]: mask[x,y,z] = mask_value # Use nice magic numbers for windowing the image. We need to do this as we are alpha blending the mask # with the original image. sitk.Show(sitk.LabelOverlay(sitk.Cast(sitk.IntensityWindowing(spherical_fiducials_image, windowMinimum=-32767, windowMaximum=-29611), sitk.sitkUInt8), mask, opacity=0.5)) roi_list = list(rois.keys()) roi_list.insert(0,'Select ROI') interact(select_roi_dropdown_callback, roi_name=roi_list, roi_dict=fixed(rois));
33_Segmentation_Thresholding_Edge_Detection.ipynb
thewtex/SimpleITK-Notebooks
apache-2.0
Thresholding based approach To see whether this approach is appropriate we look at the histogram of intensity values inside the ROI. We know that the spheres have higher intensity values. Ideally we would have a bimodal distribution with clear separation between the sphere and background.
intensity_values = sitk.GetArrayFromImage(spherical_fiducials_image) roi_intensity_values = intensity_values[mask_ranges[2][0]:mask_ranges[2][-1], mask_ranges[1][0]:mask_ranges[1][-1], mask_ranges[0][0]:mask_ranges[0][-1]].flatten() plt.hist(roi_intensity_values, bins=100) plt.title("Intensity Values in ROI") plt.show()
33_Segmentation_Thresholding_Edge_Detection.ipynb
thewtex/SimpleITK-Notebooks
apache-2.0
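Otsu's method itself is short enough to sketch in plain numpy: pick the threshold that maximizes the between-class variance of the histogram. This is a from-scratch illustration on synthetic bimodal data, not SimpleITK's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic bimodal intensities: a dark background mode and a bright "sphere" mode
values = np.concatenate([rng.normal(0.0, 1.0, 5000), rng.normal(10.0, 1.0, 1000)])

def otsu_threshold(x, bins=100):
    """Return the threshold maximizing between-class variance of the histogram."""
    hist, edges = np.histogram(x, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    w = hist / hist.sum()
    w0 = np.cumsum(w)                 # cumulative probability of class 0
    mu = np.cumsum(w * centers)       # cumulative first moment
    mu_t = mu[-1]                     # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    sigma_b = np.nan_to_num(sigma_b)  # endpoints where one class is empty
    return centers[np.argmax(sigma_b)]

t_star = otsu_threshold(values)
print(t_star)   # lands in the gap between the two modes
```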
Can you identify the region of the histogram associated with the sphere? In our case it looks like we can automatically select a threshold separating the sphere from the background. We will use Otsu's method for threshold selection to segment the sphere and estimate its radius.
# Set pixels that are in [min_intensity,otsu_threshold] to inside_value, values above otsu_threshold are # set to outside_value. The sphere's have higher intensity values than the background, so they are outside. inside_value = 0 outside_value = 255 number_of_histogram_bins = 100 mask_output = True labeled_result = sitk.OtsuThreshold(spherical_fiducials_image, mask, inside_value, outside_value, number_of_histogram_bins, mask_output, mask_value) # Estimate the sphere radius from the segmented image using the LabelShapeStatisticsImageFilter. label_shape_analysis = sitk.LabelShapeStatisticsImageFilter() label_shape_analysis.SetBackgroundValue(inside_value) label_shape_analysis.Execute(labeled_result) print("The sphere's radius is: {0:.2f}mm".format(label_shape_analysis.GetEquivalentSphericalRadius(outside_value))) # Visually inspect the results of segmentation, just to make sure. sitk.Show(sitk.LabelOverlay(sitk.Cast(sitk.IntensityWindowing(spherical_fiducials_image, windowMinimum=-32767, windowMaximum=-29611), sitk.sitkUInt8), labeled_result, opacity=0.5))
33_Segmentation_Thresholding_Edge_Detection.ipynb
thewtex/SimpleITK-Notebooks
apache-2.0
Based on your visual inspection, did the automatic threshold correctly segment the sphere or did it over/under segment it? If automatic thresholding did not provide the desired result, you can correct it by allowing the user to modify the threshold under visual inspection. Implement this approach below.
# Your code here:
33_Segmentation_Thresholding_Edge_Detection.ipynb
thewtex/SimpleITK-Notebooks
apache-2.0
Edge detection based approach In this approach we will localize the sphere's edges in 3D using SimpleITK. We then compute the least squares sphere that optimally fits the 3D points using scipy/numpy. The mathematical formulation for this solution is described in this Insight Journal paper.
# Create a cropped version of the original image. sub_image = spherical_fiducials_image[mask_ranges[0][0]:mask_ranges[0][-1], mask_ranges[1][0]:mask_ranges[1][-1], mask_ranges[2][0]:mask_ranges[2][-1]] # Edge detection on the sub_image with appropriate thresholds and smoothing. edges = sitk.CannyEdgeDetection(sitk.Cast(sub_image, sitk.sitkFloat32), lowerThreshold=0.0, upperThreshold=200.0, variance = (5.0,5.0,5.0))
33_Segmentation_Thresholding_Edge_Detection.ipynb
thewtex/SimpleITK-Notebooks
apache-2.0
Get the 3D location of the edge points and fit a sphere to them.
edge_indexes = np.where(sitk.GetArrayFromImage(edges) == 1.0) # Note the reversed order of access between SimpleITK and numpy (z,y,x) physical_points = [edges.TransformIndexToPhysicalPoint([int(x), int(y), int(z)]) \ for z,y,x in zip(edge_indexes[0], edge_indexes[1], edge_indexes[2])] # Setup and solve linear equation system. A = np.ones((len(physical_points),4)) b = np.zeros(len(physical_points)) for row, point in enumerate(physical_points): A[row,0:3] = -2*np.array(point) b[row] = -linalg.norm(point)**2 res,_,_,_ = linalg.lstsq(A,b) print("The sphere's radius is: {0:.2f}mm".format(np.sqrt(linalg.norm(res[0:3])**2 - res[3]))) # Visually inspect the results of edge detection, just to make sure. Note that because SimpleITK is working in the # physical world (not pixels, but mm) we can easily transfer the edges localized in the cropped image to the original. edge_label = sitk.Image(spherical_fiducials_image.GetSize(), sitk.sitkUInt16) edge_label.CopyInformation(spherical_fiducials_image) e_label = 255 for point in physical_points: edge_label[edge_label.TransformPhysicalPointToIndex(point)] = e_label sitk.Show(sitk.LabelOverlay(sitk.Cast(sitk.IntensityWindowing(spherical_fiducials_image, windowMinimum=-32767, windowMaximum=-29611), sitk.sitkUInt8), edge_label, opacity=0.5))
33_Segmentation_Thresholding_Edge_Detection.ipynb
thewtex/SimpleITK-Notebooks
apache-2.0
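As a sanity check on the least-squares formulation, a small synthetic sketch: generate noisy points on a sphere with known center and radius, and confirm the same linear system $[-2x\ {-2y}\ {-2z}\ 1]\,[c_x\ c_y\ c_z\ d]^T = -\|p\|^2$ recovers both.

```python
import numpy as np
from numpy import linalg

rng = np.random.default_rng(42)
center, radius = np.array([10.0, -5.0, 3.0]), 7.5
# random directions projected onto the sphere surface, plus small noise
v = rng.normal(size=(500, 3))
points = center + radius * v / linalg.norm(v, axis=1, keepdims=True)
points += rng.normal(scale=0.01, size=points.shape)

# same system as in the notebook cell above
A = np.ones((len(points), 4))
A[:, :3] = -2 * points
b = -np.sum(points**2, axis=1)
res, *_ = linalg.lstsq(A, b, rcond=None)
est_radius = np.sqrt(linalg.norm(res[:3])**2 - res[3])
print(est_radius)   # close to 7.5
```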
As always, let's do imports and initialize a logger and a new bundle.
import phoebe from phoebe import u # units import numpy as np import matplotlib.pyplot as plt logger = phoebe.logger() b = phoebe.default_binary()
2.3/examples/animation_binary_complete.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Adding Datasets
times = np.linspace(0,1,21) b.add_dataset('lc', times=times, dataset='lc01') b.add_dataset('rv', times=times, dataset='rv01') b.add_dataset('mesh', times=times, columns=['visibilities', 'intensities@lc01', 'rvs@rv01'], dataset='mesh01')
2.3/examples/animation_binary_complete.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Plotting See the Animations Tutorial for more examples and details. Here we'll create a figure with multiple subplots. The top row will be the light curve and RV curve. The bottom three subplots will be various representations of the mesh (intensities, rvs, and visibilities). We'll do this by making separate calls to plot, passing the matplotlib subplot location for each axes we want to create. We can then call b.show(animate=True) or b.savefig('anim.gif', animate=True).
b['lc01@model'].plot(axpos=221) b['rv01@model'].plot(c={'primary': 'blue', 'secondary': 'red'}, linestyle='solid', axpos=222) b['mesh@model'].plot(fc='intensities@lc01', ec='None', axpos=425) b['mesh@model'].plot(fc='rvs@rv01', ec='None', axpos=427) b['mesh@model'].plot(fc='visibilities', ec='None', y='ws', axpos=224) fig = plt.figure(figsize=(11,4)) afig, mplanim = b.savefig('animation_binary_complete.gif', fig=fig, tight_layout=True, draw_sidebars=False, animate=True, save_kwargs={'writer': 'imagemagick'})
2.3/examples/animation_binary_complete.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Building the Classifier Now that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.
# read codes and labels from file import csv with open('labels') as f: reader = csv.reader(f, delimiter='\n') labels = np.array([each for each in reader if len(each) > 0]).squeeze() with open('codes') as f: codes = np.fromfile(f, dtype=np.float32) codes = codes.reshape((len(labels), -1))
week2/vgg_transfer_imagenet_to_flower/transfer_learning_python.ipynb
infilect/ml-course1
mit
Plot dynamics functions
if data_dim == 2: lim = 5 x = np.linspace(-lim, lim, 10) y = np.linspace(-lim, lim, 10) X, Y = np.meshgrid(x, y) xy = np.column_stack((X.ravel(), Y.ravel())) fig, axs = plt.subplots(1, num_states, figsize=(3 * num_states, 6)) for k in range(num_states): A, b = weights[k], biases[k] dxydt_m = xy.dot(A.T) + b - xy axs[k].quiver(xy[:, 0], xy[:, 1], dxydt_m[:, 0], dxydt_m[:, 1], color=colors[k % len(colors)]) axs[k].set_xlabel("$y_1$") # axs[k].set_xticks([]) if k == 0: axs[k].set_ylabel("$y_2$") # axs[k].set_yticks([]) axs[k].set_aspect("equal") plt.tight_layout() plt.savefig("arhmm-flow-matrices.pdf") colors print(stationary_points)
deprecated/arhmm_example.ipynb
probml/pyprobml
mit
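The flow fields plotted above vanish at each state's stationary point, where $Ay^* + b = y^*$, i.e. $y^* = (I - A)^{-1} b$. A quick check with a hypothetical rotation-plus-shrink dynamics matrix (parameters invented for illustration):

```python
import numpy as np

theta = 0.2   # hypothetical rotation angle
A = 0.95 * np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
b = np.array([1.0, 0.0])
# stationary point: solve (I - A) y* = b
y_star = np.linalg.solve(np.eye(2) - A, b)
# at the stationary point the AR step produces no motion
print(A @ y_star + b - y_star)   # essentially [0, 0]
```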
Sample data from the ARHMM
# Make an Autoregressive (AR) HMM true_initial_distribution = tfp.distributions.Categorical(logits=np.zeros(num_states)) true_transition_distribution = tfp.distributions.Categorical(probs=transition_matrix) true_arhmm = GaussianARHMM( num_states, transition_matrix=transition_matrix, emission_weights=weights, emission_biases=biases, emission_covariances=covariances, ) time_bins = 10000 true_states, data = true_arhmm.sample(jr.PRNGKey(0), time_bins) fig = plt.figure(figsize=(8, 8)) for k in range(num_states): plt.plot(*data[true_states == k].T, "o", color=colors[k], alpha=0.75, markersize=3) plt.plot(*data[:1000].T, "-k", lw=0.5, alpha=0.2) plt.xlabel("$y_1$") plt.ylabel("$y_2$") # plt.gca().set_aspect("equal") plt.savefig("arhmm-samples-2d.pdf") fig = plt.figure(figsize=(8, 8)) for k in range(num_states): ndx = true_states == k data_k = data[ndx] T = 12 data_k = data_k[:T, :] plt.plot(data_k[:, 0], data_k[:, 1], "o", color=colors[k], alpha=0.75, markersize=3) for t in range(T): plt.text(data_k[t, 0], data_k[t, 1], t, color=colors[k], fontsize=12) # plt.plot(*data[:1000].T, '-k', lw=0.5, alpha=0.2) plt.xlabel("$y_1$") plt.ylabel("$y_2$") # plt.gca().set_aspect("equal") plt.savefig("arhmm-samples-2d-temporal.pdf") print(biases) print(stationary_points) colors
deprecated/arhmm_example.ipynb
probml/pyprobml
mit
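The generative process being sampled here can be written down in a few lines of plain numpy: a Markov chain over discrete states, with each state owning its own AR(1) step. The parameters below are invented for illustration, not the notebook's fitted values:

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.95, 0.05],        # sticky transition matrix
              [0.05, 0.95]])
theta = 0.25
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
A = np.stack([0.95 * np.eye(2), 0.95 * R])   # per-state dynamics matrices
b = np.array([[0.5, 0.0], [0.0, -0.5]])      # per-state biases

T = 1000
z = np.zeros(T, dtype=int)
y = np.zeros((T, 2))
for t in range(1, T):
    z[t] = rng.choice(2, p=P[z[t - 1]])                         # switch state
    y[t] = A[z[t]] @ y[t - 1] + b[z[t]] + rng.normal(scale=0.1, size=2)
```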
Below, we visualize each component of the observation variable as a time series. The colors correspond to the latent state. The dotted lines represent the stationary point of the corresponding AR state, while the solid lines are the actual observations sampled from the HMM.
lim # Plot the data and the smoothed data plot_slice = (0, 200) lim = 1.05 * abs(data).max() plt.figure(figsize=(8, 6)) plt.imshow( true_states[None, :], aspect="auto", cmap=cmap, vmin=0, vmax=len(colors) - 1, extent=(0, time_bins, -lim, (data_dim) * lim), ) Ey = np.array(stationary_points)[true_states] for d in range(data_dim): plt.plot(data[:, d] + lim * d, "-k") plt.plot(Ey[:, d] + lim * d, ":k") plt.xlim(plot_slice) plt.xlabel("time") # plt.yticks(lim * np.arange(data_dim), ["$y_{{{}}}$".format(d+1) for d in range(data_dim)]) plt.ylabel("observations") plt.tight_layout() plt.savefig("arhmm-samples-1d.pdf") data.shape data[:10, :]
deprecated/arhmm_example.ipynb
probml/pyprobml
mit
Fit an ARHMM
# Now fit an HMM to the data key1, key2 = jr.split(jr.PRNGKey(0), 2) test_num_states = num_states initial_distribution = tfp.distributions.Categorical(logits=np.zeros(test_num_states)) transition_distribution = tfp.distributions.Categorical(logits=np.zeros((test_num_states, test_num_states))) emission_distribution = GaussianLinearRegression( weights=np.tile(0.99 * np.eye(data_dim), (test_num_states, 1, 1)), bias=0.01 * jr.normal(key2, (test_num_states, data_dim)), scale_tril=np.tile(np.eye(data_dim), (test_num_states, 1, 1)), ) arhmm = GaussianARHMM(test_num_states, data_dim, num_lags, seed=jr.PRNGKey(0)) lps, arhmm, posterior = arhmm.fit(data, method="em") # Plot the log likelihoods against the true likelihood, for comparison true_lp = true_arhmm.marginal_likelihood(data) plt.plot(lps, label="EM") plt.plot(true_lp * np.ones(len(lps)), ":k", label="True") plt.xlabel("EM Iteration") plt.ylabel("Log Probability") plt.legend(loc="lower right") plt.show() # # Find a permutation of the states that best matches the true and inferred states # most_likely_states = posterior.most_likely_states() # arhmm.permute(find_permutation(true_states[num_lags:], most_likely_states)) # posterior.update() # most_likely_states = posterior.most_likely_states() if data_dim == 2: lim = abs(data).max() x = np.linspace(-lim, lim, 10) y = np.linspace(-lim, lim, 10) X, Y = np.meshgrid(x, y) xy = np.column_stack((X.ravel(), Y.ravel())) fig, axs = plt.subplots(2, max(num_states, test_num_states), figsize=(3 * num_states, 6)) for i, model in enumerate([true_arhmm, arhmm]): for j in range(model.num_states): dist = model._emissions._distribution[j] A, b = dist.weights, dist.bias dxydt_m = xy.dot(A.T) + b - xy axs[i, j].quiver(xy[:, 0], xy[:, 1], dxydt_m[:, 0], dxydt_m[:, 1], color=colors[j % len(colors)]) axs[i, j].set_xlabel("$x_1$") axs[i, j].set_xticks([]) if j == 0: axs[i, j].set_ylabel("$x_2$") axs[i, j].set_yticks([]) axs[i, j].set_aspect("equal") plt.tight_layout() 
plt.savefig("argmm-flow-matrices-true-and-estimated.pdf") if data_dim == 2: lim = abs(data).max() x = np.linspace(-lim, lim, 10) y = np.linspace(-lim, lim, 10) X, Y = np.meshgrid(x, y) xy = np.column_stack((X.ravel(), Y.ravel())) fig, axs = plt.subplots(1, max(num_states, test_num_states), figsize=(3 * num_states, 6)) for i, model in enumerate([arhmm]): for j in range(model.num_states): dist = model._emissions._distribution[j] A, b = dist.weights, dist.bias dxydt_m = xy.dot(A.T) + b - xy axs[j].quiver(xy[:, 0], xy[:, 1], dxydt_m[:, 0], dxydt_m[:, 1], color=colors[j % len(colors)]) axs[j].set_xlabel("$y_1$") axs[j].set_xticks([]) if j == 0: axs[j].set_ylabel("$y_2$") axs[j].set_yticks([]) axs[j].set_aspect("equal") plt.tight_layout() plt.savefig("arhmm-flow-matrices-estimated.pdf") # Plot the true and inferred discrete states plot_slice = (0, 1000) plt.figure(figsize=(8, 4)) plt.subplot(211) plt.imshow(true_states[None, num_lags:], aspect="auto", interpolation="none", cmap=cmap, vmin=0, vmax=len(colors) - 1) plt.xlim(plot_slice) plt.ylabel("$z_{\\mathrm{true}}$") plt.yticks([]) plt.subplot(212) # plt.imshow(most_likely_states[None,: :], aspect="auto", cmap=cmap, vmin=0, vmax=len(colors)-1) plt.imshow(posterior.expected_states[0].T, aspect="auto", interpolation="none", cmap="Greys", vmin=0, vmax=1) plt.xlim(plot_slice) plt.ylabel("$z_{\\mathrm{inferred}}$") plt.yticks([]) plt.xlabel("time") plt.tight_layout() plt.savefig("arhmm-state-est.pdf") # Sample the fitted model sampled_states, sampled_data = arhmm.sample(jr.PRNGKey(0), time_bins) fig = plt.figure(figsize=(8, 8)) for k in range(num_states): plt.plot(*sampled_data[sampled_states == k].T, "o", color=colors[k], alpha=0.75, markersize=3) # plt.plot(*sampled_data.T, '-k', lw=0.5, alpha=0.2) plt.plot(*sampled_data[:1000].T, "-k", lw=0.5, alpha=0.2) plt.xlabel("$x_1$") plt.ylabel("$x_2$") # plt.gca().set_aspect("equal") plt.savefig("arhmm-samples-2d-estimated.pdf")
deprecated/arhmm_example.ipynb
probml/pyprobml
mit
Implement Preprocessing Function Text to Word Ids As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end. You can get the <EOS> word id by doing: python target_vocab_to_int['<EOS>'] You can get other word ids using source_vocab_to_int and target_vocab_to_int.
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text. :param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ # TODO: Implement Function source_split, target_split = source_text.split('\n'), target_text.split('\n') source_to_int, target_to_int = [], [] for source, target in zip(source_split, target_split): source_to_int.append([source_vocab_to_int[word] for word in source.split()]) targets = [target_vocab_to_int[word] for word in target.split()] targets.append((target_vocab_to_int['<EOS>'])) target_to_int.append(targets) #print(source_to_int, target_to_int) return source_to_int, target_to_int """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids)
language-translation/dlnd_language_translation.ipynb
hvillanua/deep-learning
mit
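On a toy vocabulary the transformation looks like this (the words and id assignments are made up for illustration):

```python
# hypothetical toy vocabularies
source_vocab_to_int = {'new': 0, 'jersey': 1, 'is': 2, 'cold': 3}
target_vocab_to_int = {'<EOS>': 0, 'new': 1, 'jersey': 2, 'est': 3, 'froid': 4}

source_sentence = "new jersey is cold"
target_sentence = "new jersey est froid"
source_ids = [source_vocab_to_int[w] for w in source_sentence.split()]
target_ids = [target_vocab_to_int[w] for w in target_sentence.split()]
target_ids.append(target_vocab_to_int['<EOS>'])   # mark where the sentence ends
print(source_ids)   # [0, 1, 2, 3]
print(target_ids)   # [1, 2, 3, 4, 0]
```

Only the target side gets the trailing `<EOS>` id, since that is what the decoder must learn to emit.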
Build the Neural Network You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below: - model_inputs - process_decoder_input - encoding_layer - decoding_layer_train - decoding_layer_infer - decoding_layer - seq2seq_model Input Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders: Input text placeholder named "input" using the TF Placeholder name parameter with rank 2. Targets placeholder with rank 2. Learning rate placeholder with rank 0. Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0. Target sequence length placeholder named "target_sequence_length" with rank 1 Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0. Source sequence length placeholder named "source_sequence_length" with rank 1 Return the placeholders in the following the tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)
def model_inputs(): """ Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences. :return: Tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) """ # TODO: Implement Function #max_tar_seq_len = np.max([len(sentence) for sentence in target_int_text]) #max_sour_seq_len = np.max([len(sentence) for sentence in source_int_text]) #max_source_len = np.max([max_tar_seq_len, max_sour_seq_len]) inputs = tf.placeholder(tf.int32, [None, None], name='input') targets = tf.placeholder(tf.int32, [None, None]) learning_rate = tf.placeholder(tf.float32) keep_probability = tf.placeholder(tf.float32, name='keep_prob') target_seq_len = tf.placeholder(tf.int32, [None], name='target_sequence_length') max_target_seq_len = tf.reduce_max(target_seq_len, name='max_target_len') source_seq_len = tf.placeholder(tf.int32, [None], name='source_sequence_length') return inputs, targets, learning_rate, keep_probability, target_seq_len, max_target_seq_len, source_seq_len """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs)
language-translation/dlnd_language_translation.ipynb
hvillanua/deep-learning
mit
Process Decoder Input Implement process_decoder_input by removing the last word id from each batch in target_data and concat the GO ID to the begining of each batch.
def process_decoder_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for encoding :param target_data: Target Placehoder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ # TODO: Implement Function ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1]) dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1) return dec_input """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_encoding_input(process_decoder_input)
language-translation/dlnd_language_translation.ipynb
hvillanua/deep-learning
mit
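The slicing is easier to see on a concrete batch: drop each row's last id and prepend the GO id (ids below are arbitrary; numpy slicing stands in for `tf.strided_slice`/`tf.concat`):

```python
import numpy as np

GO = 1                                    # hypothetical <GO> id
target_data = np.array([[5, 6, 7, 2],
                        [8, 9, 2, 0]])
ending = target_data[:, :-1]              # drop the last column of each row
go_col = np.full((target_data.shape[0], 1), GO)
dec_input = np.concatenate([go_col, ending], axis=1)
print(dec_input)
# [[1 5 6 7]
#  [1 8 9 2]]
```

The decoder thus always starts from `<GO>` and is never fed the final token it is supposed to predict.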
Encoding Implement encoding_layer() to create a Encoder RNN layer: * Embed the encoder input using tf.contrib.layers.embed_sequence * Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper * Pass cell and embedded input to tf.nn.dynamic_rnn()
from imp import reload reload(tests) def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :param source_sequence_length: a list of the lengths of each sequence in the batch :param source_vocab_size: vocabulary size of source data :param encoding_embedding_size: embedding size of source data :return: tuple (RNN output, RNN state) """ # TODO: Implement Function embed_seq = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size) def lstm_cell(): return tf.contrib.rnn.LSTMCell(rnn_size) rnn = tf.contrib.rnn.MultiRNNCell([lstm_cell() for i in range(num_layers)]) rnn = tf.contrib.rnn.DropoutWrapper(rnn, output_keep_prob=keep_prob) output, state = tf.nn.dynamic_rnn(rnn, embed_seq, dtype=tf.float32) return output, state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer)
language-translation/dlnd_language_translation.ipynb
hvillanua/deep-learning
mit
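Underneath `embed_sequence` is just an embedding lookup: integer ids index rows of a (trainable) matrix. A numpy sketch of the shape transformation:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, embed_dim = 10, 4
embedding = rng.normal(size=(vocab_size, embed_dim))   # trainable in TF
ids = np.array([[2, 5, 7],
                [1, 0, 3]])                            # (batch, time) token ids
embedded = embedding[ids]                              # fancy indexing = lookup
print(embedded.shape)   # (2, 3, 4)
```

A (batch, time) tensor of ids becomes a (batch, time, embed_dim) tensor of vectors, which is what `dynamic_rnn` consumes.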
Decoding - Training Create a training decoding layer: * Create a tf.contrib.seq2seq.TrainingHelper * Create a tf.contrib.seq2seq.BasicDecoder * Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_summary_length, output_layer, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param target_sequence_length: The lengths of each sequence in the target batch :param max_summary_length: The length of the longest sequence in the batch :param output_layer: Function to apply the output layer :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing training logits and sample_id """ # TODO: Implement Function training_helper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input, target_sequence_length) train_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, training_helper, encoder_state, output_layer) output, _ = tf.contrib.seq2seq.dynamic_decode(train_decoder, impute_finished=False, maximum_iterations=max_summary_length) return output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train)
language-translation/dlnd_language_translation.ipynb
hvillanua/deep-learning
mit
Decoding - Inference Create inference decoder: * Create a tf.contrib.seq2seq.GreedyEmbeddingHelper * Create a tf.contrib.seq2seq.BasicDecoder * Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param max_target_sequence_length: Maximum length of target sequences :param vocab_size: Size of decoder/target vocabulary :param decoding_scope: TenorFlow Variable Scope for decoding :param output_layer: Function to apply the output layer :param batch_size: Batch size :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing inference logits and sample_id """ # TODO: Implement Function start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size], name='start_tokens') inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_of_sequence_id) inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, inference_helper, encoder_state, output_layer) output, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder, impute_finished=True, maximum_iterations=max_target_sequence_length) return output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer)
language-translation/dlnd_language_translation.ipynb
hvillanua/deep-learning
mit
Build the Decoding Layer

Implement decoding_layer() to create a Decoder RNN layer:
* Embed the target sequences
* Construct the decoder LSTM cell (just like you constructed the encoder cell above)
* Create an output layer to map the outputs of the decoder to the elements of our vocabulary
* Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
* Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.

Note: You'll need to use tf.variable_scope to share variables between training and inference.
def decoding_layer(dec_input, encoder_state, target_sequence_length,
                   max_target_sequence_length, rnn_size, num_layers,
                   target_vocab_to_int, target_vocab_size, batch_size,
                   keep_prob, decoding_embedding_size):
    """
    Create decoding layer
    :param dec_input: Decoder input
    :param encoder_state: Encoder state
    :param target_sequence_length: The lengths of each sequence in the target batch
    :param max_target_sequence_length: Maximum length of target sequences
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :param target_vocab_size: Size of target vocabulary
    :param batch_size: The size of the batch
    :param keep_prob: Dropout keep probability
    :param decoding_embedding_size: Decoding embedding size
    :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
    """
    dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
    dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)

    def lstm_cell():
        return tf.contrib.rnn.LSTMCell(rnn_size)

    rnn = tf.contrib.rnn.MultiRNNCell([lstm_cell() for i in range(num_layers)])
    rnn = tf.contrib.rnn.DropoutWrapper(rnn, output_keep_prob=keep_prob)

    output_layer = Dense(target_vocab_size,
                         kernel_initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1))

    with tf.variable_scope("decode"):
        training_output = decoding_layer_train(encoder_state, rnn, dec_embed_input,
                                               target_sequence_length, max_target_sequence_length,
                                               output_layer, keep_prob)
    with tf.variable_scope("decode", reuse=True):
        inference_output = decoding_layer_infer(encoder_state, rnn, dec_embeddings,
                                                target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'],
                                                max_target_sequence_length, target_vocab_size,
                                                output_layer, batch_size, keep_prob)
    return training_output, inference_output

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer(decoding_layer)
language-translation/dlnd_language_translation.ipynb
hvillanua/deep-learning
mit
Build the Neural Network

Apply the functions you implemented above to:
* Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size) function.
* Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.
* Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
                  source_sequence_length, target_sequence_length,
                  max_target_sentence_length, source_vocab_size, target_vocab_size,
                  enc_embedding_size, dec_embedding_size,
                  rnn_size, num_layers, target_vocab_to_int):
    """
    Build the Sequence-to-Sequence part of the neural network
    :param input_data: Input placeholder
    :param target_data: Target placeholder
    :param keep_prob: Dropout keep probability placeholder
    :param batch_size: Batch Size
    :param source_sequence_length: Sequence lengths of source sequences in the batch
    :param target_sequence_length: Sequence lengths of target sequences in the batch
    :param max_target_sentence_length: Maximum target sequence length
    :param source_vocab_size: Source vocabulary size
    :param target_vocab_size: Target vocabulary size
    :param enc_embedding_size: Encoder embedding size
    :param dec_embedding_size: Decoder embedding size
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
    """
    _, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob,
                                  source_sequence_length, source_vocab_size, enc_embedding_size)
    dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)
    training_output, inference_output = decoding_layer(dec_input, enc_state,
                                                       target_sequence_length, max_target_sentence_length,
                                                       rnn_size, num_layers, target_vocab_to_int,
                                                       target_vocab_size, batch_size, keep_prob,
                                                       dec_embedding_size)
    return training_output, inference_output

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_seq2seq_model(seq2seq_model)
language-translation/dlnd_language_translation.ipynb
hvillanua/deep-learning
mit
Neural Network Training

Hyperparameters

Tune the following parameters:
* Set epochs to the number of epochs.
* Set batch_size to the batch size.
* Set rnn_size to the size of the RNNs.
* Set num_layers to the number of layers.
* Set encoding_embedding_size to the size of the embedding for the encoder.
* Set decoding_embedding_size to the size of the embedding for the decoder.
* Set learning_rate to the learning rate.
* Set keep_probability to the Dropout keep probability.
* Set display_step to state how many steps between each debug output statement.
# Number of Epochs
epochs = 10
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 254
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 200
decoding_embedding_size = 200
# Learning Rate
learning_rate = 0.01
# Dropout Keep Probability
keep_probability = 0.5
display_step = 10
language-translation/dlnd_language_translation.ipynb
hvillanua/deep-learning
mit
Train

Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
""" DON'T MODIFY ANYTHING IN THIS CELL """ def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1])], 'constant') return np.mean(np.equal(target, logits)) # Split data to training and validation sets train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = source_int_text[:batch_size] valid_target = target_int_text[:batch_size] (valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source, valid_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate( get_batches(train_source, train_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])): _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, target_sequence_length: targets_lengths, source_sequence_length: sources_lengths, keep_prob: keep_probability}) if batch_i % display_step == 0 and batch_i > 0: batch_train_logits = sess.run( inference_logits, {input_data: source_batch, source_sequence_length: sources_lengths, target_sequence_length: targets_lengths, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_sources_batch, source_sequence_length: valid_sources_lengths, target_sequence_length: valid_targets_lengths, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits) print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation 
Accuracy: {:>6.4f}, Loss: {:>6.4f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved')
language-translation/dlnd_language_translation.ipynb
hvillanua/deep-learning
mit
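The padding in get_accuracy above exists because the inference decoder can stop early, so the logits may be shorter (or longer) than the targets; both arrays are zero-padded to the same width before the element-wise comparison. A numpy-free sketch of the same idea on single sequences (seq_accuracy is a hypothetical helper, not part of the project code):

```python
def seq_accuracy(target, predicted, pad=0):
    """Pad the shorter sequence with `pad`, then score element-wise matches."""
    n = max(len(target), len(predicted))
    t = target + [pad] * (n - len(target))
    p = predicted + [pad] * (n - len(predicted))
    return sum(a == b for a, b in zip(t, p)) / n

print(seq_accuracy([4, 7, 9, 1], [4, 7, 2]))  # 0.5 — two of four positions match
```

Note that a pad position only counts as a match if the other sequence also holds the pad id there, which is exactly what the zero-padding in get_accuracy does.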
Sentence to Sequence

To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences:
* Convert the sentence to lowercase
* Convert words into ids using vocab_to_int
* Convert words not in the vocabulary to the <UNK> word id.
def sentence_to_seq(sentence, vocab_to_int):
    """
    Convert a sentence to a sequence of ids
    :param sentence: String
    :param vocab_to_int: Dictionary to go from the words to an id
    :return: List of word ids
    """
    sentence = sentence.lower()
    sentence_to_id = [vocab_to_int[word] if word in vocab_to_int else vocab_to_int['<UNK>']
                      for word in sentence.split(' ')]
    return sentence_to_id

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_sentence_to_seq(sentence_to_seq)
language-translation/dlnd_language_translation.ipynb
hvillanua/deep-learning
mit
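A quick way to sanity-check the lookup logic without the project's real vocabulary: dict.get with a default does the same job as the if/else in the comprehension. The toy vocabulary below is made up purely for illustration:

```python
def to_seq(sentence, vocab_to_int):
    # same behaviour as sentence_to_seq: lowercase, then map unknowns to <UNK>
    return [vocab_to_int.get(word, vocab_to_int['<UNK>'])
            for word in sentence.lower().split(' ')]

toy_vocab = {'<UNK>': 0, 'he': 1, 'saw': 2, 'a': 3, 'cat': 4}
print(to_seq('He saw a zebra', toy_vocab))  # [1, 2, 3, 0] — 'zebra' maps to <UNK>
```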
Turn the relative reference to the current directory, '.', into an absolute path, then specify the file name we're looking for. In a while-loop, look in ever higher directories of the directory tree until we find the one that contains our file. Walking upward in the tree is done by cutting off the tail of the path in each step.
apth = os.path.abspath('.')
fname = 'README.md'
print("Starting in folder <{}>,\n to look for file <{}> ...".format(apth, fname))
print()
print("I'm searching: ...")
while fname not in os.listdir(apth):
    apth = apth.rpartition(os.sep)[0]
    print(apth)
if fname in os.listdir(apth):
    print("... Yep, got'm!")
else:
    print("... H'm, missed him")
print("\nOk, file <{}> is in folder: ".format(fname), apth)
print('\nHere is the list of files in this folder:')
os.listdir(apth)
exercises/Mar07/readingText.ipynb
Olsthoorn/IHE-python-course-2017
gpl-2.0
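Cutting off the path tail with rpartition works, but os.path.dirname expresses the same step more portably, and a guard is needed so the loop cannot spin forever when the file does not exist anywhere up the tree. A sketch of the same upward walk packaged as a function (find_upward is a hypothetical name, not part of any library):

```python
import os

def find_upward(fname, start='.'):
    """Walk up the directory tree until fname is found; return its folder,
    or None if we reach the filesystem root without finding it."""
    path = os.path.abspath(start)
    while fname not in os.listdir(path):
        parent = os.path.dirname(path)
        if parent == path:          # dirname of the root is the root itself
            return None
        path = parent
    return path
```

Called as find_upward('README.md'), it returns the nearest ancestor folder containing README.md.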
Reading the file (at once)

When we know where the file is, we can open it for reading. Opening it yields a reader object, by which we can read the file:

reader = open(path, 'r')
s = reader.read()
reader.close()

The problem with this is that while we are exploring the reader, we may easily reach the end of the file, after which nothing more is read and s is an empty string. Furthermore, when we experiment, we may easily open the same file many times and forget to close it. The with statement is a solution to that, because it automatically closes the file when we leave its block. With the with statement we can read the entire file into a string like so:
with open(os.path.join(apth, fname), 'r') as reader:
    s = reader.read()
exercises/Mar07/readingText.ipynb
Olsthoorn/IHE-python-course-2017
gpl-2.0
It's the read that swallows the entire file at once and dumps its contents into the string s. Check that the reader is, indeed, closed after we finish the with block:
reader.closed
exercises/Mar07/readingText.ipynb
Olsthoorn/IHE-python-course-2017
gpl-2.0
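An aside: read() pulls the whole file into memory at once. For large files you can instead iterate over the reader, which yields one line at a time and still closes the file automatically at the end of the with block. A self-contained sketch using a throwaway temporary file (so it runs anywhere, independent of README.md):

```python
import os
import tempfile

# create a small throwaway file so the example is self-contained
tmp = tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False)
tmp.write("first line\nsecond line\nthird line\n")
tmp.close()

lines = []
with open(tmp.name, 'r') as reader:
    for line in reader:               # one line per iteration, '\n' included
        lines.append(line.rstrip('\n'))

print(lines)           # ['first line', 'second line', 'third line']
print(reader.closed)   # True — the with block closed it for us
os.remove(tmp.name)    # clean up the throwaway file
```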
Then show the contents of the string s:
print(s)
exercises/Mar07/readingText.ipynb
Olsthoorn/IHE-python-course-2017
gpl-2.0
Counting words and phrases

Now that we have the entire file read into a single string, s, we can just as well analyze it a bit by counting the number of words and letters, and the frequency of each letter. Just split the string into words based on whitespace and count their number:
print("There are {} words in file {}".format(len(s.split(sep=' ')), fname))
exercises/Mar07/readingText.ipynb
Olsthoorn/IHE-python-course-2017
gpl-2.0
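One caveat with sep=' ': every single space counts as a separator, so runs of spaces produce empty strings and the newlines between lines glue words together, inflating or distorting the count. Calling split() with no argument splits on any run of whitespace instead, which is usually what you want. A small illustration on a toy string (kept in a separate variable so the file contents in s are untouched):

```python
txt = "two  words\nand   more"
print(len(txt.split(sep=' ')))  # 6 — empty strings and 'words\nand' included
print(len(txt.split()))         # 4 — splits on any run of whitespace
```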
We might estimate the number of sentences by counting the number of periods ('.'). One way is to use the . as a separator:
nPhrases = len(s.split(sep='.'))  # also works without the keyword sep
print("We find {} phrases in file {}".format(nPhrases, fname))
exercises/Mar07/readingText.ipynb
Olsthoorn/IHE-python-course-2017
gpl-2.0
We could just as well count the number of dots in s directly, using one of the string methods, in this case s.count():
print("There are {} dots in file {}".format(s.count('.'), fname))
exercises/Mar07/readingText.ipynb
Olsthoorn/IHE-python-course-2017
gpl-2.0
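Note that the two counts differ by one: split('.') always yields one segment more than there are dots, because the text after the last dot (even if empty) forms a segment too. So len(s.split('.')) overcounts the phrases by one. A quick check on a toy string (again using a separate variable so s is untouched):

```python
txt = "One. Two. Three."
print(len(txt.split('.')))  # 4 — three phrases plus an empty tail segment
print(txt.count('.'))       # 3 — matches the number of sentences here
```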
Counting the non-whitespace characters

Now let's see how many non-whitespace characters there are in s. A coarse way to remove whitespace would be splitting s and rejoining the obtained list of words without any whitespace, like so:
s1 = "".join(s.split()).lower() # also make all letters in lowerface() print(s1)
exercises/Mar07/readingText.ipynb
Olsthoorn/IHE-python-course-2017
gpl-2.0
All characters in a list

If we convert a string into a list, we get the list of its individual characters.
list(s1)
exercises/Mar07/readingText.ipynb
Olsthoorn/IHE-python-course-2017
gpl-2.0
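The plan above also mentioned counting the frequency of each letter; collections.Counter does this directly on a string (or on the list of its characters). A short sketch on a toy string, since the actual contents of s1 depend on the README found:

```python
from collections import Counter

freq = Counter("helloworld")    # counts every character in the string
print(freq['l'])                # 3
print(freq.most_common(1))      # [('l', 3)] — the most frequent letter
```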