Neural Network
#building the model
model = models.Sequential()
model.add(layers.Dense(128, input_shape=(X_train.shape[1],), activation='relu'))
model.add(layers.Dense(256, activation='relu'))
model.add(layers.Dense(256, activation='relu'))
model.add(layers.Dense(1, activation='linear'))

#compiling the model
model.compile(optimizer='adam', loss='mse', metrics=[r2_keras])

#model summary
print(model.summary())

# Training the model
model_start = time.time()
model_history = model.fit(X_train, y_train, epochs=500, batch_size=256, validation_data=(X_test, y_test))
model_end = time.time()
print(f"Time taken to run: {round((model_end - model_start)/60,1)} minutes")

#evaluate model
loss_train = model_history.history['loss']
loss_val = model_history.history['val_loss']
plt.figure(figsize=(8,6))
plt.plot(model_history.history['loss'])
plt.plot(model_history.history['val_loss'])
plt.title('Training and Test loss at each epoch')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()

score_train = model.evaluate(X_train, y_train, verbose=0)
score_test = model.evaluate(X_test, y_test, verbose=0)
train_r2 = round(score_train[1], 4)
test_r2 = round(score_test[1], 4)
train_mse = round(score_train[0], 4)
test_mse = round(score_test[0], 4)
metrics_df = appendToMetricsdf(metrics_df, "Neural Network", train_r2, test_r2, train_mse, test_mse)
_____no_output_____
Apache-2.0
AirBNB_Analysis_San_Francisco.ipynb
zaveta/AirBNB-Analysis-San-Francisco
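The custom metric `r2_keras` and the helper `appendToMetricsdf` used in the Neural Network cell are defined earlier in the notebook and not shown here. The R² the metric reports follows the standard coefficient-of-determination formula; a minimal NumPy sketch of that formula (a hypothetical stand-in, not the notebook's actual Keras-backend implementation):

```python
import numpy as np

def r2_score_np(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)   # residual sum of squares
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

# A perfect prediction scores 1.0
print(r2_score_np([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))
```

A model that always predicts the mean of `y_true` scores 0, which is why R² is a convenient sanity check alongside MSE.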
Evaluate the Results

Let's take a look at our results and compare them with each other.
metrics_df
_____no_output_____
Apache-2.0
AirBNB_Analysis_San_Francisco.ipynb
zaveta/AirBNB-Analysis-San-Francisco
[![Azure Notebooks](https://notebooks.azure.com/launch.png)](https://notebooks.azure.com/import/gh/Alireza-Akhavan/class.vision) Text generation with an LSTM recurrent network in Keras. The code is adapted from chapter 8 of the book [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff) and from the notebooks of the book's author and Keras developer [François Chollet](http://nbviewer.jupyter.org/github/fchollet/deep-learning-with-python-notebooks/blob/master/8.1-text-generation-with-lstm.ipynb).
import keras keras.__version__
Using TensorFlow backend.
MIT
42-text-generation-with-lstm.ipynb
mhsattarian/class.vision
Text generation with LSTM: Implementing character-level LSTM text generation. Let's put these ideas into practice in a Keras implementation. The first thing we need is a lot of text data that we can use to learn a language model. You could use any sufficiently large text file or set of text files -- Wikipedia, the Lord of the Rings, etc. In this example we will use some of the writings of Nietzsche, the late-19th-century German philosopher (translated to English). The language model we learn will thus be specifically a model of Nietzsche's writing style and topics of choice, rather than a more generic model of the English language. Dataset:
import keras
import numpy as np

path = keras.utils.get_file(
    'nietzsche.txt',
    origin='https://s3.amazonaws.com/text-datasets/nietzsche.txt')
text = open(path).read().lower()
print('Corpus length:', len(text))
Corpus length: 600901
MIT
42-text-generation-with-lstm.ipynb
mhsattarian/class.vision
Next, we will extract partially overlapping sequences of length `maxlen`, one-hot encode them, and pack them in a 3D Numpy array `x` of shape `(sequences, maxlen, unique_characters)`. Simultaneously, we prepare an array `y` containing the corresponding targets: the one-hot encoded characters that come right after each extracted sequence.
# Length of extracted character sequences
maxlen = 60

# We sample a new sequence every `step` characters
step = 3

# This holds our extracted sequences
sentences = []

# This holds the targets (the follow-up characters)
next_chars = []

for i in range(0, len(text) - maxlen, step):
    sentences.append(text[i: i + maxlen])
    next_chars.append(text[i + maxlen])
print('Number of sequences:', len(sentences))

# List of unique characters in the corpus
chars = sorted(list(set(text)))
print('Unique characters:', len(chars))

# Dictionary mapping unique characters to their index in `chars`
char_indices = dict((char, chars.index(char)) for char in chars)

# Next, one-hot encode the characters into binary arrays.
# (The built-in `bool` replaces the deprecated `np.bool` alias.)
print('Vectorization...')
x = np.zeros((len(sentences), maxlen, len(chars)), dtype=bool)
y = np.zeros((len(sentences), len(chars)), dtype=bool)
for i, sentence in enumerate(sentences):
    for t, char in enumerate(sentence):
        x[i, t, char_indices[char]] = 1
    y[i, char_indices[next_chars[i]]] = 1
Number of sequences: 200281 Unique characters: 59 Vectorization...
MIT
42-text-generation-with-lstm.ipynb
mhsattarian/class.vision
Building the network. Our network is a single `LSTM` layer followed by a `Dense` classifier with a softmax over all possible characters. But let us note that recurrent neural networks are not the only way to do sequence data generation; 1D convnets have also proven extremely successful at it in recent times.
from keras import layers

model = keras.models.Sequential()
model.add(layers.LSTM(128, input_shape=(maxlen, len(chars))))
model.add(layers.Dense(len(chars), activation='softmax'))
_____no_output_____
MIT
42-text-generation-with-lstm.ipynb
mhsattarian/class.vision
Since our targets are one-hot encoded, we will use `categorical_crossentropy` as the loss to train the model:
optimizer = keras.optimizers.RMSprop(lr=0.01)
model.compile(loss='categorical_crossentropy', optimizer=optimizer)
_____no_output_____
MIT
42-text-generation-with-lstm.ipynb
mhsattarian/class.vision
Training the language model and sampling from it

Given a trained model and a seed text snippet, we generate new text by repeatedly:

1) Drawing from the model a probability distribution over the next character given the text available so far
2) Reweighting the distribution to a certain "temperature"
3) Sampling the next character at random according to the reweighted distribution
4) Adding the new character at the end of the available text

This is the code we use to reweight the original probability distribution coming out of the model and draw a character index from it (the "sampling function"):
def sample(preds, temperature=1.0):
    preds = np.asarray(preds).astype('float64')
    preds = np.log(preds) / temperature
    exp_preds = np.exp(preds)
    preds = exp_preds / np.sum(exp_preds)
    probas = np.random.multinomial(1, preds, 1)
    return np.argmax(probas)
_____no_output_____
MIT
42-text-generation-with-lstm.ipynb
mhsattarian/class.vision
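The effect of temperature can be seen numerically: reweighting a distribution at a low temperature sharpens it toward the argmax, while a high temperature flattens it toward uniform. A standalone NumPy demo of the same reweighting used in `sample` (the random draw is omitted so the result is deterministic):

```python
import numpy as np

def reweight(preds, temperature):
    """Rescale a probability distribution by a sampling temperature."""
    preds = np.asarray(preds).astype('float64')
    preds = np.log(preds) / temperature
    exp_preds = np.exp(preds)
    return exp_preds / np.sum(exp_preds)

p = np.array([0.5, 0.3, 0.2])
sharp = reweight(p, 0.2)  # low temperature -> probability mass concentrates on the top character
flat = reweight(p, 1.2)   # high temperature -> distribution moves closer to uniform
print(sharp, flat)
```

Both results are still valid probability distributions (they sum to 1); only the entropy changes.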
Finally, this is the loop where we repeatedly train the model and generate text. After every epoch we generate text using a range of different temperatures. This lets us see how the generated text evolves as the model starts converging, as well as the impact of temperature on the sampling strategy.
import random
import sys

for epoch in range(1, 60):
    print('epoch', epoch)
    # Fit the model for 1 epoch on the available training data
    model.fit(x, y, batch_size=128, epochs=1)

    # Select a text seed at random
    start_index = random.randint(0, len(text) - maxlen - 1)
    generated_text = text[start_index: start_index + maxlen]
    print('--- Generating with seed: "' + generated_text + '"')

    for temperature in [0.2, 0.5, 1.0, 1.2]:
        print('------ temperature:', temperature)
        sys.stdout.write(generated_text)

        # We generate 400 characters
        for i in range(400):
            sampled = np.zeros((1, maxlen, len(chars)))
            for t, char in enumerate(generated_text):
                sampled[0, t, char_indices[char]] = 1.

            preds = model.predict(sampled, verbose=0)[0]
            next_index = sample(preds, temperature)
            next_char = chars[next_index]

            generated_text += next_char
            generated_text = generated_text[1:]

            sys.stdout.write(next_char)
            sys.stdout.flush()
        print()
epoch 1 Epoch 1/1 200278/200278 [==============================] - 126s - loss: 1.9895 --- Generating with seed: "h they inspire." or, as la rochefoucauld says: "if you think" ------ temperature: 0.2 h they inspire." or, as la rochefoucauld says: "if you think in the sense of the say the same of the antimated and present in the all the has a such and opent and the say and and the fan and the sense of the into the sense of the say the words and the present the sense of the present present of the present in the man is the man in the sense of the say the sense of the say and the say and the say it is the such and the sense of the ast the sense of the say ------ temperature: 0.5 t is the such and the sense of the ast the sense of the say the instand of the way and it is the man for the some songully the sain it is opperience of all the sensity of the same the intendition of the man, in the most with the same philosophicism of the feelient of internations of a present and and colleng it is the sense the greath to the highers of the antolity as nature and the really in the spilitions the leaded and decome the has opence in the sume ------ temperature: 1.0 spilitions the leaded and decome the has opence in the sume the orded out powe higher mile as of coftere obe inbernation as to the fof ould mome evpladity. in no it granter, it is the than the say, but the most nothing which, the like the knre hindver" us setured effect of agard appate of alsoden" the lixe their men an its of losed the unistensshatity; and oppreness of this not which at the brindurely to giths of sayquitt guratuch with that this if and whu ------ temperature: 1.2 rely to giths of sayquitt guratuch with that this if and whungs thinkmani. ficcy, and peninecinated andur mage the sened in think wiwhhic to beyreasts than this gruath with thioruit catuen much. h. geevated in sporated mast the a"coid nrese mae, all conentry, .. fin perhuen venerly (whisty or spore lised har of but ic; at lebgre and things. 
it keod to pring ancayedy from dill a be utisti listousesquas oke the semment" (fim their falshin al up hesd, and u epoch 2 Epoch 1/1 200278/200278 [==============================] - 125s - loss: 1.6382 --- Generating with seed: "he cleverness of christianity.=--it is a master stroke of ch" ------ temperature: 0.2 he cleverness of christianity.=--it is a master stroke of chreme and the same and the contrary and conscience of the deason of the sense and that a sould and superion of the all the subjections and all the disting and all the more and the disting and all the same and such an all the delief to the same and more and sand and sense and all the more and the still and the sense and the more and contrary and man and such a sould and art and the presention of the ------ temperature: 0.5 y and man and such a sould and art and the presention of the daction of the still of the same is any more and sanders of hoors of who has an all the man is been fact and belief and contrary had sake and disting world so sake from the prejudice of the sentiment and the contrarism of vided and all the saymits of the man way not the achated the deadity at the "courde of sisted and all the disanctions and as a contrades in a should for a phadoward and only and ------ temperature: 1.0 and as a contrades in a should for a phadoward and only and emptoces of anmoved and the issintions eedit modeyners bre- warlt of being whole has been bit and would be thing as all it as mankfrom for is resp"quent, privelym yeads overthtice from how will has a mankinduled opine sancels and ary are but the moderation along atolity. 131. new may intempt a van the saur. trater, sake--it tantian all ass are a superstion truth, "worldting and lawtyes to make l ------ temperature: 1.2 ass are a superstion truth, "worldting and lawtyes to make life coldurcly of no has grocbity of norratrimer. no weat doem not ques to thus rasg, whation. od"y polent and rulobioved agrigncary us queciest? 
41 uspotive force as unischolondanden of cratids, the unbanted caarlo soke not are re. to the trainit ene kinkly skants that self consatiof,", preveplle reasol decistuticaso itly vail. 8que"se of a every a progor veist a not caul. rigerary nature, in epoch 3 Epoch 1/1 200278/200278 [==============================] - 124s - loss: 1.5460 --- Generating with seed: "ch knows how to handle the knife surely and deftly, even whe" ------ temperature: 0.2 ch knows how to handle the knife surely and deftly, even when they has and the strength of the command the great and the sense of the great they are of the streng to the strength and the strength of the great former the strength and the strong the condition of the command they have to the strength and the profound the free spirit of the world in a more of the world and present in the compained they have been the sense of the command they have to the streng ------ temperature: 0.5 y have been the sense of the command they have to the strength concernous of the power, the have begon of the last of the profound the artists discourse in the becomes sense of the stand and concertic of texplence of the to may not a seep of the into the accuations that they heart as a solitude, in the good into the accistors, to when the have they has a stard in the last they seems they are of the consequently with the ender, and good in such a power of t ------ temperature: 1.0 e consequently with the ender, and good in such a power of the "firmat chores forgubmentatic in stand-new of a needs above than repersibily into the provivent stand" more what operiority courhe when endure really save sope ford of lower, and long of have, are sins and keet by courd. he should in the bodiec they noblephics," imported. so perhaps europe. , sechosics of the endiitagy, fougked any stranger of the corrorato it be last once or consequently no ------ temperature: 1.2 stranger of the corrorato it be last once or consequently not! 
in of access is once appearal stemporic,"--he the garwand any zer-oo -- drinequable to other one much lilutage and cumrest of the one, it not =the bas of trachtade of cowlutaf of whathout such with spount eronry are; gow a whick of a sole phvioration:whicitylyi power, in high has a conp, coming, he plession his hey!" unnects, iy every nevershs to adrataes family have insten, os ne's epoch 4 Epoch 1/1 200278/200278 [==============================] - 125s - loss: 1.4973 --- Generating with seed: "to have spoken of the sensus allegoricus of religion. he wou" ------ temperature: 0.2 to have spoken of the sensus allegoricus of religion. he would be the proposition of the subjection of the standing to such the subjection of the subjltition of the stands and the really the power of the spirit and concertion of the contrary of the concertion of the subjection to the subjection of the spirit of the subjection of the subjection of the subjection of the subjection of the contrary of the same and the subjection of the subjection of the stands ------ temperature: 0.5 the same and the subjection of the subjection of the stands of the more beartles and power of the pleasure of moral light, who is the must are an every disting of the deliebly desire the spirit in the subjection of men of distress in the single, to the strange to really been a mettful our uncertainting the expect and the stands of the expochish, exhection of the truth and the merely, and the doctior and enory and the pation of the thought and for a feat o ------ temperature: 1.0 ior and enory and the pation of the thought and for a feat offues toned spievement and common as musics of danger. that "the ordered-wants and lack of world of lettife--in any or nehin too "misundifow hundrary not incligation, dight, however, to moranary and life these motilet reculonac, to aritic means his sarkic. 
times, his tanvary him, it is their day happiness, in hare, of tood whings belief that eary when 1( the dinging it world induction in their for ------ temperature: 1.2 hat eary when 1( the dinging it world induction in their for artran, rspumous, ald redical pleniscion ap no revereiblines, tho lacquiring that fegais oracus--is preyer. the pery measime, as firnom and rack. -purss love to they like relight of reoning cage of signtories, the timu to coursite; that libenes afverbtersal; all catured, ehhic: when all tumple, heartted a inhting in away love the puten party al mistray. i jesess. own can clatorify seloperati", wh epoch 5 Epoch 1/1 200278/200278 [==============================] - 125s - loss: 1.4682 --- Generating with seed: "ion (werthschätzung)--the dislocation, distortion and the ap" ------ temperature: 0.2 ion (werthschätzung)--the dislocation, distortion and the appearation of his sensition and conscience of the distrusting the far the sensition of the individually the suffering the sense of the presentiments of the sense of the suffering and suffering the stronger of the suffering and the consequently the sense of the subject of the sense of the moral the sense of the desire the sense of the self--and the sensition of the suffering the sensition of the sen ------ temperature: 0.5 -and the sensition of the suffering the sensition of the sensition of the individual hence all the perceived as an existence of a few to who is new spirits of himself which may be the world ground our democration in every undifferent of the purely the far much of the estimate religions of the strong and sense of the other reality and conscience and the self-sure he has gare in the self--and knows man and period with the spirit and consequently consequently ------ temperature: 1.0 man and period with the spirit and consequently consequently hast"" but every every matters (without mad their world who prodessions are weok they consciences of commutionally men) who in comtring. 
this she appaine, without have under which ialations from o srud nothing in the metively to ding tender, in any hens in all very another purithe the complactions--how varies in the exrepration world and though the ethicangling; there is everything our comliferac ------ temperature: 1.2 though the ethicangling; there is everything our comliferacled ourianceince the long---r=nony much of anyome. if they lanifuels enally inepinious of may, the commin's for concern, there are has dmarding" to actable,ly effet will itower, butiness the condinided"--rings up they will futher miands, incondations? gear of limitny, conlict of hervedozihare and the intosting perious into comediand, setakest perficiated and inlital self--nage peruody; there is sp epoch 6 Epoch 1/1 200278/200278 [==============================] - 125s - loss: 1.4466 --- Generating with seed: "rd, which always flies further aloft in order always to see " ------ temperature: 0.2 rd, which always flies further aloft in order always to see the suffering that the suffering the sense of the strengthes of the suffering that it is a more and the self-complication of the suffering the suffering and the subtle and self-compartion of the comparting the suffering of the suffering the most the suffering the suffering of the compartion of the most present and the strength and the sufferings of the most the sense of the suffering the sense of ------ temperature: 0.5 ferings of the most the sense of the suffering the sense of the expect of the intellieate strengent of the dit the attaint is a soul one of the hond to the heart the most expect of the religious the sense of the histle of the fear of the same individual in such a most interest of the had to so the immorality of the possess of the allow, the compress is entitul condition, the discountering in the more reveale, and the refined the fear it is betered one to s ------ temperature: 1.0 ore reveale, and the refined the fear it is betered one to self-contindaning 
hypition of surdinguates the possible ataint, when he must beakes comple in the grody of the opposite oftent tog, pain finds one that templily to the truthdly one of the fasting oby the highest present treative must materies of incase varies in a cain, when seaced in seasoury, or such them of earlily, and so its as of the will to their to forms too scienticiel and for which it hea ------ temperature: 1.2 will to their to forms too scienticiel and for which it headds maid, estavelhing question, for thuer, requite tomlan"! what its do touthodly, thereby). theurse out who juveangh of tly histomiaraeg, in peinds. on it. all bemond mimal. the more harr acqueire it, he house, at of accouncing patedpance han" willly the ellara "formy tellate. medish purman tturfil an attruth been the custrestiblries in themen-and lightly again ih a daawas or its learhting than c epoch 7 Epoch 1/1 200278/200278 [==============================] - 125s - loss: 1.4282 --- Generating with seed: "realize how characteristic is this fear of the "man" in the " ------ temperature: 0.2 realize how characteristic is this fear of the "man" in the spirit and and the superition of the propert and perhaps be the superition of the superition of the same of the same the spirit and in the same the strong the still contrast and and and the sure an end and the strong to the destand that the standard, and the spirit and the superition of the superition of the strong the strong that the superition and the state of the same the spirit and and be the ------ temperature: 0.5 erition and the state of the same the spirit and and be the same said the spirit to the state and admired to rechancient man as a self felt that the religious distinguished the human believe that the deception, in soul, the stands had been man to be has striced be actual perhaps in all the interpretical strong the decontaitsnentine, the philosophy happiness of the greatest formerly be for the fact deep and weaker of an involuntarian man is one 
has to the c ------ temperature: 1.0 deep and weaker of an involuntarian man is one has to the carely community: ourselves as it seem with theme in hami dance alto manifesty, mansike of which that thereby religion, and reason, a litely for of the allarded by pogures, such diviniatifings and disentached, with life of suffernes, this , altherage. 1afetuenally that this tooking to plong tematic thate and surfoundaas: the progreable and untisy; which dhes mifere the all such a philosophers, a ------ temperature: 1.2 and untisy; which dhes mifere the all such a philosophers, as the athained the such living upon serposed if, his injuring, "the most standhfulness. no the dalb(basise, equal di butz if. thereby mast wast had to plangubly overman hat our eitrieious tar and hearth--a -of womet far imminalk and "she of castuoled.--in the oalt. ant ollicatiom prot behing-ma formuln unkercite--and probachte-patial the historled qualizsss section unterman contlict of, bein epoch 8 Epoch 1/1 200278/200278 [==============================] - 125s - loss: 1.4150 --- Generating with seed: "ith religion itself and regarded as the supreme attainment o" ------ temperature: 0.2 ith religion itself and regarded as the supreme attainment of the words to the scientifical strength and such an and and instincts and and the profoundly to the senses the subjection of the subtle and be desired and still be way and the same interpretation of the way, and the self-destines in the subjection of the desire and and such an experience of the same and be a still be subjection, the spirit is and all the surposition of the same and the subjection ------ temperature: 0.5 it is and all the surposition of the same and the subjection and the poind to the profound the same obscure of good, a spirit of an extent so from the greates the similated to himself with the place of spirit was to whenever the masters" of the experience, that is an extent or their spyous and need, and the experience and past by its the 
higher the schopenhauer's with an abstration and the purposed to understand that it is destined and destiny of himself, ------ temperature: 1.0 d to understand that it is destined and destiny of himself, fur feshicutawas terding itswhas ourselves which an " intain segret shise them? this opposing for ourselvesl. and as life-doatts? with and light, e spirit, he oppisest, one be does not as the differnes. 18
MIT
42-text-generation-with-lstm.ipynb
mhsattarian/class.vision
Swarm intelligence agent

Last checked score: 1062.9
def swarm(obs, conf):
    def send_scout_carrier(x, y):
        """ send scout carrier to explore current cell and, if possible, cell above """
        points = send_scouts(x, y)
        # if cell above exists
        if y > 0:
            cell_above_points = send_scouts(x, y - 1)
            # cell above points have lower priority
            if points < m1 and points < (cell_above_points - 1):
                # current cell's points will be negative
                points -= cell_above_points
        return points

    def send_scouts(x, y):
        """ send scouts to get points from all axes of the cell """
        axes = explore_axes(x, y)
        points = combine_points(axes)
        return points

    def explore_axes(x, y):
        """ find points, marks, zeros and amount of in_air cells of all axes of the cell,
            "NE" = North-East etc. """
        return {
            "NE -> SW": [
                explore_direction(x, lambda z: z + 1, y, lambda z: z - 1),
                explore_direction(x, lambda z: z - 1, y, lambda z: z + 1)
            ],
            "E -> W": [
                explore_direction(x, lambda z: z + 1, y, lambda z: z),
                explore_direction(x, lambda z: z - 1, y, lambda z: z)
            ],
            "SE -> NW": [
                explore_direction(x, lambda z: z + 1, y, lambda z: z + 1),
                explore_direction(x, lambda z: z - 1, y, lambda z: z - 1)
            ],
            "S -> N": [
                explore_direction(x, lambda z: z, y, lambda z: z + 1),
                explore_direction(x, lambda z: z, y, lambda z: z - 1)
            ]
        }

    def explore_direction(x, x_fun, y, y_fun):
        """ get points, mark, zeros and amount of in_air cells of this direction """
        # consider only opponent's mark
        mark = 0
        points = 0
        zeros = 0
        in_air = 0
        for i in range(one_mark_to_win):
            x = x_fun(x)
            y = y_fun(y)
            # if board[x][y] is inside board's borders
            if y >= 0 and y < conf.rows and x >= 0 and x < conf.columns:
                # mark of the direction will be the mark of the first non-empty cell
                if mark == 0 and board[x][y] != 0:
                    mark = board[x][y]
                # if board[x][y] is empty
                if board[x][y] == 0:
                    zeros += 1
                    if (y + 1) < conf.rows and board[x][y + 1] == 0:
                        in_air += 1
                elif board[x][y] == mark:
                    points += 1
                # stop searching for marks in this direction
                else:
                    break
        return {
            "mark": mark,
            "points": points,
            "zeros": zeros,
            "in_air": in_air
        }

    def combine_points(axes):
        """ combine points of different axes """
        points = 0
        # loop through all axes
        for axis in axes:
            # if mark in both directions of the axis is the same
            # or mark is zero in one or both directions of the axis
            if (axes[axis][0]["mark"] == axes[axis][1]["mark"]
                    or axes[axis][0]["mark"] == 0
                    or axes[axis][1]["mark"] == 0):
                # combine points of the same axis
                points += evaluate_amount_of_points(
                    axes[axis][0]["points"] + axes[axis][1]["points"],
                    axes[axis][0]["zeros"] + axes[axis][1]["zeros"],
                    axes[axis][0]["in_air"] + axes[axis][1]["in_air"],
                    m1, m2,
                    axes[axis][0]["mark"]
                )
            else:
                # if marks in directions of the axis are different and none of those marks is 0
                for direction in axes[axis]:
                    points += evaluate_amount_of_points(
                        direction["points"],
                        direction["zeros"],
                        direction["in_air"],
                        m1, m2,
                        direction["mark"]
                    )
        return points

    def evaluate_amount_of_points(points, zeros, in_air, m1, m2, mark):
        """ evaluate amount of points in one direction or entire axis """
        # if points + zeros in one direction or entire axis >= one_mark_to_win
        # multiply amount of points by one of the multipliers or keep amount of points as it is
        if (points + zeros) >= one_mark_to_win:
            if points >= one_mark_to_win:
                points *= m1
            elif points == two_marks_to_win:
                points = points * m2 + zeros - in_air
            else:
                points = points + zeros - in_air
        else:
            points = 0
        return points

    #################################################################################

    # one_mark_to_win points multiplier
    m1 = 100
    # two_marks_to_win points multiplier
    m2 = 10
    # define swarm's mark
    swarm_mark = obs.mark
    # define opponent's mark
    opp_mark = 2 if swarm_mark == 1 else 1
    # define one mark to victory
    one_mark_to_win = conf.inarow - 1
    # define two marks to victory
    two_marks_to_win = conf.inarow - 2
    # define board as two dimensional array
    board = []
    for column in range(conf.columns):
        board.append([])
        for row in range(conf.rows):
            board[column].append(obs.board[conf.columns * row + column])
    # define board center
    board_center = conf.columns // 2
    # start searching for the_column from board center
    x = board_center
    # shift to left/right from board center
    shift = 0
    # THE COLUMN !!!
    the_column = {
        "x": x,
        "points": float("-inf")
    }
    # searching for the_column
    while x >= 0 and x < conf.columns:
        # find first empty cell starting from bottom of the column
        y = conf.rows - 1
        while y >= 0 and board[x][y] != 0:
            y -= 1
        # if column is not full
        if y >= 0:
            # send scout carrier to get points
            points = send_scout_carrier(x, y)
            # evaluate which column is THE COLUMN !!!
            if points > the_column["points"]:
                the_column["x"] = x
                the_column["points"] = points
        # shift x to right or left from swarm center
        shift *= -1
        if shift >= 0:
            shift += 1
        x = board_center + shift
    # Swarm's final decision :)
    return the_column["x"]
_____no_output_____
MIT
Swarm Intelligence Bot/Swarm Intelligence agent.ipynb
aanshul22/ConnectX_bot-Kaggle
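The flat `obs.board` list from the ConnectX environment is row-major with the top row first; the agent re-indexes it as `board[column][row]`. A standalone sketch of just that re-indexing step, using `SimpleNamespace` as a hypothetical mock of Kaggle's `obs`/`conf` objects:

```python
from types import SimpleNamespace

# Mocked stand-ins for Kaggle's obs/conf objects (hypothetical values).
conf = SimpleNamespace(rows=2, columns=3, inarow=2)
obs = SimpleNamespace(mark=1, board=[1, 0, 2,
                                     1, 2, 0])

# Same re-indexing as in the agent: board[column][row], row 0 = top.
board = []
for column in range(conf.columns):
    board.append([])
    for row in range(conf.rows):
        board[column].append(obs.board[conf.columns * row + column])

print(board)  # -> [[1, 1], [0, 2], [2, 0]]
```

Column-major access is what makes the "find first empty cell starting from the bottom" scan in the main loop a simple walk up one inner list.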
Converting the agent into a Python file so that it can be submitted
import inspect
import os

def write_agent_to_file(function, file):
    with open(file, "a" if os.path.exists(file) else "w") as f:
        f.write(inspect.getsource(function))
        print(function, "written to", file)

write_agent_to_file(swarm, os.path.join(os.getcwd(), "submission.py"))
<function swarm at 0x0000024ACFC97E58> written to D:\Notebooks\submission.py
MIT
Swarm Intelligence Bot/Swarm Intelligence agent.ipynb
aanshul22/ConnectX_bot-Kaggle
Module dependency installation functions

The function moduleExists accepts a regex: moduleExists(r"minio.*")
import re
import pkg_resources
import sys

def moduleExists(moduleFilter: str) -> bool:
    installed_packages = pkg_resources.working_set
    installed_packages_list = sorted(["%s==%s" % (i.key, i.version) for i in installed_packages])
    installed_packages_list = list(filter(lambda s: re.match(moduleFilter, s), installed_packages_list))
    if installed_packages_list and len(installed_packages_list) > 0:
        print("Modules found.")
        print(installed_packages_list)
        return True
    return False
_____no_output_____
Apache-2.0
Libs/ModulesManagement.ipynb
vertechcon/jupyter
The function ensureInstalled takes both the moduleName and a regex: ensureInstalled("minio", r"minio.*")
def ensureInstalled(moduleName: str, flt: str):
    if not moduleExists(flt):
        !{sys.executable} -m pip install {moduleName}
        print("Module installed.")
    else:
        print("Module already installed.")

def ensureInstalled_noDeps(moduleName: str, flt: str):
    if not moduleExists(flt):
        !{sys.executable} -m pip install {moduleName} --no-deps
        print("Module installed.")
    else:
        print("Module already installed.")

#Tests
#ensureInstalled("minio", r"minio.*")
Modules found. ['minio==5.0.10'] Module already installed.
Apache-2.0
Libs/ModulesManagement.ipynb
vertechcon/jupyter
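`pkg_resources` is deprecated in recent setuptools releases; on Python 3.8+ the same check can be written with the standard-library `importlib.metadata`. A sketch of that alternative (my addition, not part of the original notebook):

```python
import re
from importlib import metadata

def module_exists(module_filter: str) -> bool:
    """Return True if any installed distribution matches the regex."""
    installed = sorted(
        f"{dist.metadata['Name']}=={dist.version}".lower()
        for dist in metadata.distributions()
    )
    matches = [pkg for pkg in installed if re.match(module_filter, pkg)]
    if matches:
        print("Modules found.")
        print(matches)
        return True
    return False
```

Unlike `pkg_resources.working_set`, `metadata.distributions()` enumerates installed distributions lazily and needs no third-party dependency.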
Getting to know LSTMs better

Created: September 13, 2018
Author: Thamme Gowda

Goals:
- To get batches of *unequal length sequences* encoded correctly!
- Know how the hidden states flow between encoders and decoders
- Know how the multiple stacked LSTM layers pass hidden states

Example: a simple bi-directional LSTM which takes 3d input vectors and produces 2d output vectors.
import torch
from torch import nn

lstm = nn.LSTM(3, 2, batch_first=True, bidirectional=True)

# Let's create a batch input.
# 3 sequences in the batch (the first dim), see batch_first=True.
# The longest sequence is 4 time steps ==> second dimension.
# Each time step is a 3d input vector ==> last dimension.
pad_seq = torch.rand(3, 4, 3)

# That is nice for the theory, but in practice we are dealing with
# unequal-length sequences. Among those 3 sequences in the batch, let us say:
# first sequence is the longest, with 4 time steps --> no padding needed
# second seq is 3 time steps --> pad the last time step
pad_seq[1, 3, :] = 0.0
# third seq is 2 time steps --> pad the last two steps
pad_seq[2, 2:, :] = 0.0

print("Padded Input:")
print(pad_seq)

# so we got these lengths
lens = [4, 3, 2]
print("Sequence Lengths: ", lens)

# let's send the padded seq to the LSTM
out, (h_t, c_t) = lstm(pad_seq)
print("All Outputs:")
print(out)
All Outputs: tensor([[[ 0.0428, -0.3015, 0.0359, 0.0557], [ 0.0919, -0.4145, 0.0278, 0.0480], [ 0.0768, -0.4989, 0.0203, 0.0674], [ 0.1019, -0.4925, -0.0177, 0.0224]], [[ 0.0587, -0.3025, 0.0017, 0.0201], [ 0.0537, -0.3388, -0.0532, 0.0111], [ 0.0839, -0.3811, -0.0446, -0.0020], [ 0.0595, -0.3681, -0.0720, 0.0218]], [[ 0.0147, -0.2585, -0.0093, 0.0756], [ 0.0398, -0.3531, -0.0174, 0.0369], [ 0.0458, -0.3476, -0.0912, 0.0243], [ 0.0422, -0.3360, -0.0720, 0.0218]]], grad_fn=<TransposeBackward0>)
Apache-2.0
notes/know-lstms-better.ipynb
pegasus-lynx/rtg
^^ The output is a 2x2d = 4d vector since the LSTM is bidirectional: the forward 2d and backward 2d outputs are concatenated.

Total vectors = 12: 3 seqs in batch x 4 time steps; each vector is 4d.

> Hmm, what happened to my padding time steps? Will padded zeros mess with the internal weights of the LSTM when I do backprop?

---

Let's look at the last hidden state:
print(h_t)
tensor([[[ 0.1019, -0.4925], [ 0.0595, -0.3681], [ 0.0422, -0.3360]], [[ 0.0359, 0.0557], [ 0.0017, 0.0201], [-0.0093, 0.0756]]], grad_fn=<ViewBackward>)
Apache-2.0
notes/know-lstms-better.ipynb
pegasus-lynx/rtg
The last hidden state is a 2d vector (same per-direction size as the output), but there are two per sequence because the RNN is bidirectional.

There are 3 of them since there were three seqs in the batch, each corresponding to the last step.

But the definition of *last time step* is a bit tricky:
- For the left-to-right LSTM, it is the last step of the input.
- For the right-to-left LSTM, it is the first step of the input.

This makes sense now.

---

Let's look at $c_t$:
print("Last c_t:")
print(c_t)
Last c_t: tensor([[[ 0.3454, -1.0070], [ 0.1927, -0.6731], [ 0.1361, -0.6063]], [[ 0.1219, 0.1858], [ 0.0049, 0.0720], [-0.0336, 0.2787]]], grad_fn=<ViewBackward>)
Apache-2.0
notes/know-lstms-better.ipynb
pegasus-lynx/rtg
This should be similar to the last hidden state.

Question:
> What happened to my padding time steps? Did the last hidden state exclude the padded time steps?

I can see that the last hidden state of the forward LSTM didn't distinguish the padded zeros.

Let's see the output of each time step and the last hidden state of the left-to-right LSTM, again. We know that the lengths (after removing padding) are [4, 3, 2].
print("All time stamp outputs:")
print(out[:, :, :2])
print("Last hidden state (forward LSTM):")
print(h_t[0])
All time stamp outputs: tensor([[[ 0.0428, -0.3015], [ 0.0919, -0.4145], [ 0.0768, -0.4989], [ 0.1019, -0.4925]], [[ 0.0587, -0.3025], [ 0.0537, -0.3388], [ 0.0839, -0.3811], [ 0.0595, -0.3681]], [[ 0.0147, -0.2585], [ 0.0398, -0.3531], [ 0.0458, -0.3476], [ 0.0422, -0.3360]]], grad_fn=<SliceBackward>) Last hidden state (forward LSTM): tensor([[ 0.1019, -0.4925], [ 0.0595, -0.3681], [ 0.0422, -0.3360]], grad_fn=<SelectBackward>)
Apache-2.0
notes/know-lstms-better.ipynb
pegasus-lynx/rtg
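The "last time step" convention described above can also be checked programmatically. The following is a standalone sketch (not part of the original notebook) that builds a fresh bidirectional LSTM and verifies the convention on a single, unpadded sequence:

```python
import torch
from torch import nn

torch.manual_seed(0)
lstm = nn.LSTM(3, 2, batch_first=True, bidirectional=True)

x = torch.rand(1, 4, 3)            # a single, unpadded 4-step sequence
out, (h_t, c_t) = lstm(x)

# forward LSTM: last hidden state == output of the LAST step (first 2 dims)
assert torch.allclose(h_t[0, 0], out[0, -1, :2])
# backward LSTM: last hidden state == output of the FIRST step (last 2 dims)
assert torch.allclose(h_t[1, 0], out[0, 0, 2:])
print("last-step convention verified")
```

With no padding involved, both assertions hold exactly; padding is what breaks the forward half of this equality.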
*Okay, now I get it.*

When building sequence-to-sequence models (for machine translation) I can't pass the last hidden state like this to a decoder. We have to inform the LSTM about the lengths.

How? That's why we have `torch.nn.utils.rnn.pack_padded_sequence`.
print("Padded Seqs:")
print(pad_seq)
print("Lens:", lens)
print("Pack Padded Seqs:")
pac_pad_seq = torch.nn.utils.rnn.pack_padded_sequence(pad_seq, lens, batch_first=True)
print(pac_pad_seq)
Padded Seqs: tensor([[[0.7850, 0.6658, 0.7522], [0.3855, 0.7981, 0.6199], [0.9081, 0.6357, 0.3619], [0.2481, 0.5198, 0.2635]], [[0.2654, 0.9904, 0.3050], [0.1671, 0.1709, 0.2392], [0.0705, 0.4811, 0.3636], [0.0000, 0.0000, 0.0000]], [[0.6474, 0.5172, 0.0308], [0.5782, 0.3083, 0.5117], [0.0000, 0.0000, 0.0000], [0.0000, 0.0000, 0.0000]]]) Lens: [4, 3, 2] Pack Padded Seqs: PackedSequence(data=tensor([[0.7850, 0.6658, 0.7522], [0.2654, 0.9904, 0.3050], [0.6474, 0.5172, 0.0308], [0.3855, 0.7981, 0.6199], [0.1671, 0.1709, 0.2392], [0.5782, 0.3083, 0.5117], [0.9081, 0.6357, 0.3619], [0.0705, 0.4811, 0.3636], [0.2481, 0.5198, 0.2635]]), batch_sizes=tensor([3, 3, 2, 1]))
Apache-2.0
notes/know-lstms-better.ipynb
pegasus-lynx/rtg
Okay, this is doing some magic -- getting rid of all padded zeros -- cool!

`batch_sizes=tensor([3, 3, 2, 1])` seems to be the main ingredient of this magic.

`[3, 3, 2, 1]` -- I get it! We have 4 time steps in the batch:
- The first two steps contain all 3 seqs in the batch.
- The third step is made of the first 2 seqs in the batch.
- The fourth step is made of the first seq only.

I now understand why the sequences in the batch have to be sorted by descending order of lengths!

Now let us send it to the LSTM and see what it produces.
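As a quick aside, the `batch_sizes` pattern can be derived directly from the (descending-sorted) lengths; here is a tiny pure-Python sketch of that relationship:

```python
# batch_sizes[t] = number of sequences (sorted by descending length)
# that are still "alive" at time step t
lens = [4, 3, 2]
batch_sizes = [sum(1 for L in lens if L > t) for t in range(max(lens))]
print(batch_sizes)  # [3, 3, 2, 1]
```

This is exactly the tensor `pack_padded_sequence` reports for these lengths.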
pac_pad_out, (pac_ht, pac_ct) = lstm(pac_pad_seq)

# Let's first look at the output. This is packed output.
print(pac_pad_out)
PackedSequence(data=tensor([[ 0.0428, -0.3015, 0.0359, 0.0557], [ 0.0587, -0.3025, 0.0026, 0.0203], [ 0.0147, -0.2585, -0.0057, 0.0754], [ 0.0919, -0.4145, 0.0278, 0.0480], [ 0.0537, -0.3388, -0.0491, 0.0110], [ 0.0398, -0.3531, -0.0005, 0.0337], [ 0.0768, -0.4989, 0.0203, 0.0674], [ 0.0839, -0.3811, -0.0262, -0.0056], [ 0.1019, -0.4925, -0.0177, 0.0224]], grad_fn=<CatBackward>), batch_sizes=tensor([3, 3, 2, 1]))
Apache-2.0
notes/know-lstms-better.ipynb
pegasus-lynx/rtg
Okay, this is the packed output; the sequences are of unequal lengths. Now we need to restore the output by padding 0s for the shorter sequences.
pad_out = nn.utils.rnn.pad_packed_sequence(pac_pad_out, batch_first=True, padding_value=0)
print(pad_out)
(tensor([[[ 0.0428, -0.3015, 0.0359, 0.0557], [ 0.0919, -0.4145, 0.0278, 0.0480], [ 0.0768, -0.4989, 0.0203, 0.0674], [ 0.1019, -0.4925, -0.0177, 0.0224]], [[ 0.0587, -0.3025, 0.0026, 0.0203], [ 0.0537, -0.3388, -0.0491, 0.0110], [ 0.0839, -0.3811, -0.0262, -0.0056], [ 0.0000, 0.0000, 0.0000, 0.0000]], [[ 0.0147, -0.2585, -0.0057, 0.0754], [ 0.0398, -0.3531, -0.0005, 0.0337], [ 0.0000, 0.0000, 0.0000, 0.0000], [ 0.0000, 0.0000, 0.0000, 0.0000]]], grad_fn=<TransposeBackward0>), tensor([4, 3, 2]))
Apache-2.0
notes/know-lstms-better.ipynb
pegasus-lynx/rtg
Output looks good! Now let us look at the hidden state.
print(pac_ht)
tensor([[[ 0.1019, -0.4925], [ 0.0839, -0.3811], [ 0.0398, -0.3531]], [[ 0.0359, 0.0557], [ 0.0026, 0.0203], [-0.0057, 0.0754]]], grad_fn=<ViewBackward>)
Apache-2.0
notes/know-lstms-better.ipynb
pegasus-lynx/rtg
This is great. As we can see, the forward (left-to-right) LSTM's last hidden state is now correct as per the lengths, and the same should hold for c_t.

Let us concatenate the forward and reverse LSTM's hidden states.
torch.cat([pac_ht[0],pac_ht[1]], dim=1)
_____no_output_____
Apache-2.0
notes/know-lstms-better.ipynb
pegasus-lynx/rtg
----

Multi Layer LSTM

Let us redo the above hacking to understand how a 2-layer LSTM works.
n_layers = 2
inp_size = 3
out_size = 2
lstm2 = nn.LSTM(inp_size, out_size, num_layers=n_layers, batch_first=True, bidirectional=True)

pac_out, (h_n, c_n) = lstm2(pac_pad_seq)
print("Packed Output:")
print(pac_out)

pad_out = nn.utils.rnn.pad_packed_sequence(pac_out, batch_first=True, padding_value=0)
print("Pad Output:")
print(pad_out)

print("Last h_n:")
print(h_n)
print("Last c_n:")
print(c_n)
Packed Output: PackedSequence(data=tensor([[ 0.2443, 0.0703, -0.0871, -0.0664], [ 0.2496, 0.0677, -0.0658, -0.0605], [ 0.2419, 0.0687, -0.0701, -0.0521], [ 0.3354, 0.0964, -0.0772, -0.0613], [ 0.3272, 0.0975, -0.0655, -0.0534], [ 0.3216, 0.1055, -0.0504, -0.0353], [ 0.3644, 0.1065, -0.0752, -0.0531], [ 0.3583, 0.1116, -0.0418, -0.0350], [ 0.3760, 0.1139, -0.0438, -0.0351]], grad_fn=<CatBackward>), batch_sizes=tensor([3, 3, 2, 1])) Pad Output: (tensor([[[ 0.2443, 0.0703, -0.0871, -0.0664], [ 0.3354, 0.0964, -0.0772, -0.0613], [ 0.3644, 0.1065, -0.0752, -0.0531], [ 0.3760, 0.1139, -0.0438, -0.0351]], [[ 0.2496, 0.0677, -0.0658, -0.0605], [ 0.3272, 0.0975, -0.0655, -0.0534], [ 0.3583, 0.1116, -0.0418, -0.0350], [ 0.0000, 0.0000, 0.0000, 0.0000]], [[ 0.2419, 0.0687, -0.0701, -0.0521], [ 0.3216, 0.1055, -0.0504, -0.0353], [ 0.0000, 0.0000, 0.0000, 0.0000], [ 0.0000, 0.0000, 0.0000, 0.0000]]], grad_fn=<TransposeBackward0>), tensor([4, 3, 2])) Last h_n: tensor([[[ 0.2190, 0.2067], [ 0.1868, 0.2188], [ 0.1706, 0.2347]], [[-0.5062, 0.1701], [-0.4130, 0.2190], [-0.4228, 0.1733]], [[ 0.3760, 0.1139], [ 0.3583, 0.1116], [ 0.3216, 0.1055]], [[-0.0871, -0.0664], [-0.0658, -0.0605], [-0.0701, -0.0521]]], grad_fn=<ViewBackward>) Last c_n: tensor([[[ 0.5656, 0.3145], [ 0.4853, 0.3633], [ 0.4255, 0.3718]], [[-0.9779, 0.6461], [-0.8578, 0.7013], [-0.6978, 0.5322]], [[ 1.0754, 0.4258], [ 1.0021, 0.4184], [ 0.8623, 0.3839]], [[-0.1535, -0.2073], [-0.1187, -0.1912], [-0.1211, -0.1589]]], grad_fn=<ViewBackward>)
Apache-2.0
notes/know-lstms-better.ipynb
pegasus-lynx/rtg
The LSTM output looks similar to the single layer LSTM. However the h_n and c_n states are bigger -- since there are two layers.

Now it's time to RTFM.

> h_n of shape `(num_layers * num_directions, batch, hidden_size)`: tensor containing the hidden state for `t = seq_len`. Like output, the layers can be separated using `h_n.view(num_layers, num_directions, batch, hidden_size)` and similarly for c_n.
batch_size = 3
num_dirs = 2

# last layer's last time step hidden state
l_n_h_n = h_n.view(n_layers, num_dirs, batch_size, out_size)[-1]
print(l_n_h_n)

last_hid = torch.cat([l_n_h_n[0], l_n_h_n[1]], dim=1)
print("last layer last time stamp hidden state")
print(last_hid)

print("Padded Outputs :")
print(pad_out)
last layer last time stamp hidden state tensor([[ 0.3760, 0.1139, -0.0871, -0.0664], [ 0.3583, 0.1116, -0.0658, -0.0605], [ 0.3216, 0.1055, -0.0701, -0.0521]], grad_fn=<CatBackward>) Padded Outputs : (tensor([[[ 0.2443, 0.0703, -0.0871, -0.0664], [ 0.3354, 0.0964, -0.0772, -0.0613], [ 0.3644, 0.1065, -0.0752, -0.0531], [ 0.3760, 0.1139, -0.0438, -0.0351]], [[ 0.2496, 0.0677, -0.0658, -0.0605], [ 0.3272, 0.0975, -0.0655, -0.0534], [ 0.3583, 0.1116, -0.0418, -0.0350], [ 0.0000, 0.0000, 0.0000, 0.0000]], [[ 0.2419, 0.0687, -0.0701, -0.0521], [ 0.3216, 0.1055, -0.0504, -0.0353], [ 0.0000, 0.0000, 0.0000, 0.0000], [ 0.0000, 0.0000, 0.0000, 0.0000]]], grad_fn=<TransposeBackward0>), tensor([4, 3, 2]))
Apache-2.0
notes/know-lstms-better.ipynb
pegasus-lynx/rtg
Differential Privacy - Simple Database Queries

The database is going to be a VERY simple database with only one boolean column. Each row corresponds to a person. Each value corresponds to whether or not that person has a certain private attribute (such as whether they have a certain disease, or whether they are above/below a certain age).

We will then learn how to determine whether a database query over such a small database is differentially private or not, and more importantly, what techniques we can employ to ensure various levels of privacy.

Create a Simple Database

To do this, initialize a random list of 1s and 0s (which are the entries in our database). Note: the number of entries directly corresponds to the number of people in our database.
import torch

# the number of entries in our DB / think of it as the number of people in the DB
num_entries = 5000

db = torch.rand(num_entries) > 0.5
db
_____no_output_____
MIT
differential-privacy/differential_privacy.ipynb
gitgik/pytorch
Generate Parallel Databases

> "When querying a database, if I removed someone from the database, would the output of the query change?"

In order to check for this, we create "parallel databases", which are simply databases with one entry removed.

We'll create a list of every parallel database to the one currently contained in the "db" variable. Then, create a helper function which does the following:

- creates the initial database (db)
- creates all parallel databases
def create_parallel_db(db, remove_index):
    return torch.cat((db[0:remove_index], db[remove_index+1:]))

def create_parallel_dbs(db):
    parallel_dbs = list()
    for i in range(len(db)):
        pdb = create_parallel_db(db, i)
        parallel_dbs.append(pdb)
    return parallel_dbs

def create_db_and_parallels(num_entries):
    # generate a db and its parallel dbs on the fly
    db = torch.rand(num_entries) > 0.5
    pdbs = create_parallel_dbs(db)
    return db, pdbs

db, pdbs = create_db_and_parallels(10)
pdbs

print("Real database:", db)
print("Size of real DB", db.size())
print("A sample parallel DB", pdbs[0])
print("Size of parallel DB", pdbs[0].size())
Real database: tensor([1, 1, 1, 0, 0, 0, 0, 0, 0, 0], dtype=torch.uint8) Size of real DB torch.Size([10]) A sample parallel DB tensor([1, 1, 0, 0, 0, 0, 0, 0, 0], dtype=torch.uint8) Size of parallel DB torch.Size([9])
MIT
differential-privacy/differential_privacy.ipynb
gitgik/pytorch
Towards Evaluating The Differential Privacy of a Function

Intuitively, we want to be able to query our database and evaluate whether or not the result of the query is leaking "private" information.

> This is about evaluating whether the output of a query changes when we remove someone from the database.

Specifically, we want to evaluate the *maximum* amount the query changes when someone is removed (maximum over all possible people who could be removed). To find how much privacy is leaked, we'll iterate over each person in the database and **measure** the difference in the output of the query relative to when we query the entire database.

Just for the sake of argument, let's make our first "database query" a simple sum. Aka, we're going to count the number of 1s in the database.
db, pdbs = create_db_and_parallels(200)

def query(db):
    return db.sum()

query(db)

# the output of the parallel dbs is different from the db query
query(pdbs[1])

full_db_result = query(db)
print(full_db_result)

sensitivity = 0
sensitivity_scale = []
for pdb in pdbs:
    pdb_result = query(pdb)
    db_distance = torch.abs(pdb_result - full_db_result)
    if (db_distance > sensitivity):
        sensitivity_scale.append(db_distance)
        sensitivity = db_distance

sensitivity
_____no_output_____
MIT
differential-privacy/differential_privacy.ipynb
gitgik/pytorch
Sensitivity

> The maximum amount the query changes when removing an individual from the DB.

Evaluating the Privacy of a Function

The maximum difference between each parallel db's query result and the query result for the real database (which was 1 here) is called "sensitivity". It corresponds to the function we chose for the query. The "sum" query will always have a sensitivity of exactly 1. We can calculate sensitivity for other functions as well.

Let's calculate sensitivity for the "mean" function.
def sensitivity(query, num_entries=1000):
    db, pdbs = create_db_and_parallels(num_entries)
    full_db_result = query(db)
    max_distance = 0
    for pdb in pdbs:
        # for each parallel db, execute the query (sum, mean, ..., etc.)
        pdb_result = query(pdb)
        db_distance = torch.abs(pdb_result - full_db_result)
        if (db_distance > max_distance):
            max_distance = db_distance
    return max_distance

# our query is now the mean
def query(db):
    return db.float().mean()

sensitivity(query)
_____no_output_____
MIT
differential-privacy/differential_privacy.ipynb
gitgik/pytorch
Wow! That sensitivity is WAY lower. Note the intuition here.

> "Sensitivity" is measuring how sensitive the output of the query is to a person being removed from the database. For a simple sum, this is always 1, but for the mean, removing a person is going to change the result of the query by roughly 1 divided by the size of the database. Thus, "mean" is a VASTLY less "sensitive" function (query) than SUM.

Calculating L1 Sensitivity For Threshold

To calculate the sensitivity for the "threshold" function:

- First compute the sum over the database (i.e. sum(db)) and return whether that sum is greater than a certain threshold.
- Then, create databases of size 10 and a threshold of 5, and calculate the sensitivity of the function.
- Finally, re-initialize the database 10 times and calculate the sensitivity each time.
def query(db, threshold=5):
    """Query with a threshold of 5: returns whether the sum is > threshold or not."""
    return (db.sum() > threshold).float()

for i in range(10):
    sens = sensitivity(query, num_entries=10)
    print(sens)
0 tensor(1.) 0 0 0 0 0 0 0 tensor(1.)
MIT
differential-privacy/differential_privacy.ipynb
gitgik/pytorch
A Basic Differencing Attack

Sadly none of the functions we've looked at so far are differentially private (despite them having varying levels of sensitivity). The most basic type of attack can be done as follows.

Let's say we wanted to figure out a specific person's value in the database. All we would have to do is query for the sum of the entire database and then the sum of the entire database without that person!

Performing a Differencing Attack on Row 10 (How privacy can fail)

We'll construct a database and then demonstrate how one can use two different sum queries to expose the value of the person represented by row 10 in the database (note, you'll need to use a database with at least 10 rows).
db, _ = create_db_and_parallels(100)
db

# create a parallel db with that person (index 10) removed
pdb = create_parallel_db(db, remove_index=10)
pdb

# differencing attack using a sum query
sum(db) - sum(pdb)

# a differencing attack using a mean query
sum(db).float() / len(db) - sum(pdb).float() / len(pdb)

# differencing using a threshold
(sum(db).float() > 50) - (sum(pdb).float() > 50)
_____no_output_____
MIT
differential-privacy/differential_privacy.ipynb
gitgik/pytorch
Local Differential Privacy

Differential privacy always requires a form of randomness or noise added to the query to protect from things like differencing attacks. To explain this, let's look at randomized response.

Randomized Response (Local Differential Privacy)

Let's say I have a group of people I wish to survey about a very taboo behavior which I think they will lie about (say, I want to know if they have ever committed a certain kind of crime). I'm not a policeman, I'm just trying to collect statistics to understand the higher level trend in society. So, how do we do this? One technique is to add randomness to each person's response by giving each person the following instructions (assuming I'm asking a simple yes/no question):

- Flip a coin 2 times.
- If the first coin flip is heads, answer honestly.
- If the first coin flip is tails, answer according to the second coin flip (heads for yes, tails for no)!

Thus, each person is now protected with "plausible deniability". If they answer "Yes" to the question "have you committed X crime?", then it might be because they actually did, or it might be because they are answering according to a random coin flip. Each person has a high degree of protection. Furthermore, we can recover the underlying statistics with some accuracy, as the "true statistics" are simply averaged with a 50% probability. Thus, if we collect a bunch of samples and it turns out that 60% of people answer yes, then we know that the TRUE distribution is actually centered around 70%, because 70% averaged with 50% (a coin flip) is the 60% we observed.

However, it should be noted that, especially when we only have a few samples, this comes at the cost of accuracy. This tradeoff exists across all of Differential Privacy.

> NOTE: **The greater the privacy protection (plausible deniability), the less accurate the results.**
Let's implement this local DP for our database from before!

The main goals are to:
* Get the most accurate query with the **greatest** amount of privacy
* Achieve the greatest fit with trust models in the actual world (don't waste trust)

Let's implement local differential privacy:
db, pdbs = create_db_and_parallels(100)
db

def query(db):
    true_result = torch.mean(db.float())

    # local differential privacy adds noise to the data: replacing some
    # of the values with random values
    first_coin_flip = (torch.rand(len(db)) > 0.5).float()
    second_coin_flip = (torch.rand(len(db)) > 0.5).float()

    # differentially private DB ...
    augmented_db = db.float() * first_coin_flip + (1 - first_coin_flip) * second_coin_flip

    # the mean is skewed if we just do torch.mean(augmented_db.float()),
    # so we remove the skew introduced by the randomized response
    dp_result = torch.mean(augmented_db.float()) * 2 - 0.5

    return dp_result, true_result

db, pdbs = create_db_and_parallels(10)
private_result, true_result = query(db)
print(f"With noise: {private_result}")
print(f"Without noise: {true_result}")

# Increasing the size of the dataset
db, pdbs = create_db_and_parallels(100)
private_result, true_result = query(db)
print(f"With noise: {private_result}")
print(f"Without noise: {true_result}")

# Increasing the size of the dataset even further
db, pdbs = create_db_and_parallels(1000)
private_result, true_result = query(db)
print(f"With noise: {private_result}")
print(f"Without noise: {true_result}")
With noise: 0.5099999904632568 Without noise: 0.5210000276565552
MIT
differential-privacy/differential_privacy.ipynb
gitgik/pytorch
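The de-skewing arithmetic from the randomized-response discussion (a true rate of 70% averaged with a 50% coin gives roughly 60% observed) can be sanity-checked with a quick NumPy simulation. This is an illustrative sketch, not part of the original notebook; the 0.7 true rate is just an assumed value:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_rate = 0.7                       # assumed "true" fraction of yes-answers

truth = rng.random(n) < true_rate
honest = rng.random(n) < 0.5          # first coin flip: heads -> answer honestly
random_answer = rng.random(n) < 0.5   # second coin flip: heads -> report "yes"

answers = np.where(honest, truth, random_answer)

observed = answers.mean()             # roughly 0.5*0.7 + 0.5*0.5 = 0.6
recovered = observed * 2 - 0.5        # de-skew back toward the true rate
print(round(observed, 3), round(recovered, 3))
```

With 100,000 simulated respondents, the recovered estimate lands very close to the assumed 0.7, mirroring the "more data means less damage from noise" observation above.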
As we have seen:

> The more data we have, the less the noise will tend to affect the output of the query.

Varying Amounts of Noise

We are going to augment the randomized response query to allow for varying amounts of randomness to be added. To do this, we bias the first coin flip to be more or less likely to come up heads and then run the same experiment. We'll need to both adjust the likelihood of the first coin flip AND the de-skewing at the end (where we compute the de-skewed result).
# noise sets the probability that the first coin flip comes up tails,
# i.e. the probability that a person answers randomly
noise = 0.2

true_result = torch.mean(db.float())

# let's add noise to the data: replacing some of the values with random values
first_coin_flip = (torch.rand(len(db)) > noise).float()
second_coin_flip = (torch.rand(len(db)) > 0.5).float()

# differentially private DB ...
augmented_db = db.float() * first_coin_flip + (1 - first_coin_flip) * second_coin_flip

# torch.mean(augmented_db.float()) is skewed, so we remove the skew;
# this generalizes the earlier "* 2 - 0.5", which is the noise=0.5 special case
sk_result = augmented_db.float().mean()
dp_result = ((sk_result / noise) - 0.5) * noise / (1 - noise)
print('True result:', true_result)
print('Skewed result:', sk_result)
print('De-skewed result:', dp_result)

def query(db, noise=0.2):
    """noise sets the probability that the first coin flip comes up tails
    (i.e. the probability that a person answers randomly)."""
    true_result = torch.mean(db.float())

    # local diff privacy adds noise to data: replacing some
    # of the values with random values
    first_coin_flip = (torch.rand(len(db)) > noise).float()
    second_coin_flip = (torch.rand(len(db)) > 0.5).float()

    # differentially private DB ...
    augmented_db = db.float() * first_coin_flip + (1 - first_coin_flip) * second_coin_flip

    # remove the skew introduced by the randomized response
    sk_result = augmented_db.float().mean()
    private_result = ((sk_result / noise) - 0.5) * noise / (1 - noise)

    return private_result, true_result

# test varying noise
db, pdbs = create_db_and_parallels(10)
private_result, true_result = query(db, noise=0.2)
print(f"With noise: {private_result}")
print(f"Without noise: {true_result}")

# Increasing the size of the dataset
db, pdbs = create_db_and_parallels(100)
private_result, true_result = query(db, noise=0.4)
print(f"With noise: {private_result}")
print(f"Without noise: {true_result}")

# Increasing the size of the dataset even further
db, pdbs = create_db_and_parallels(10000)
private_result, true_result = query(db, noise=0.8)
print(f"With noise: {private_result}")
print(f"Without noise: {true_result}")
With noise: 0.5264999866485596 Without noise: 0.5004000067710876
MIT
differential-privacy/differential_privacy.ipynb
gitgik/pytorch
From the analysis above: with more data, it is easier to protect privacy with noise. It becomes a lot easier to learn about general characteristics in the DB because the algorithm has more data points to look at and compare with each other. So differential privacy mechanisms have helped us filter out any information unique to individual data entities, while letting through information that is consistent across multiple different people in the dataset.

> The larger the dataset, the easier it is to protect privacy.

The Formal Definition of Differential Privacy

The previous method of adding noise was called "Local Differential Privacy" because we added noise to each datapoint individually. This is necessary for some situations wherein the data is SO sensitive that individuals do not trust noise to be added later. However, it comes at a very high cost in terms of accuracy.

Alternatively, we can add noise AFTER the data has been aggregated by a function. This kind of noise can allow for similar levels of protection with a lower effect on accuracy. However, participants must be able to trust that no-one looked at their datapoints _before_ the aggregation took place. In some situations this works out well, in others (such as an individual hand-surveying a group of people), this is less realistic.

Nevertheless, global differential privacy is incredibly important because it allows us to perform differential privacy on smaller groups of individuals with lower amounts of noise. Let's revisit our sum function.
db, pdbs = create_db_and_parallels(100)

def query(db):
    return torch.sum(db.float())

def M(db):
    # conceptual sketch of a global-DP mechanism: noise is added AFTER
    # aggregation ("noise" is defined in the following sections)
    return query(db) + noise

query(db)
_____no_output_____
MIT
differential-privacy/differential_privacy.ipynb
gitgik/pytorch
So the idea here is that we want to add noise to the output of our function. We actually have two different kinds of noise we can add - Laplacian Noise or Gaussian Noise. However, before we do so, at this point we need to dive into the formal definition of Differential Privacy.

![alt text](dp_formula.png "Title")

_Image From: "The Algorithmic Foundations of Differential Privacy" - Cynthia Dwork and Aaron Roth - https://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf_

This definition does not _create_ differential privacy; instead it is a measure of how much privacy is afforded by a query M. Specifically, it's a comparison between running the query M on a database (x) and a parallel database (y). As you remember, parallel databases are defined to be the same as a full database (x) with one entry/person removed.

Thus, this definition says that FOR ALL parallel databases, the maximum distance between a query on database (x) and the same query on database (y) will be e^epsilon, but that occasionally this constraint won't hold with probability delta. Thus, this theorem is called "epsilon delta" differential privacy.

Epsilon

Let's unpack the intuition of this for a moment.

Epsilon Zero: If a query satisfied this inequality where epsilon was set to 0, then that would mean that the query for all parallel databases output the exact same value as the full database. As you may remember, when we calculated the "threshold" function, often the sensitivity was 0. In that case, the epsilon also happened to be zero.

Epsilon One: If a query satisfied this inequality with epsilon 1, then the maximum distance between all queries would be 1 - or more precisely - the maximum distance between the two random distributions M(x) and M(y) is 1 (because all these queries have some amount of randomness in them, just like we observed in the last section).

Delta

Delta is basically the probability that epsilon breaks. Namely, sometimes the epsilon is different for some queries than it is for others.
For example, you may remember when we were calculating the sensitivity of threshold, most of the time sensitivity was 0 but sometimes it was 1. Thus, we could calculate this as "epsilon zero but non-zero delta" which would say that epsilon is perfect except for some probability of the time when it's arbitrarily higher. Note that this expression doesn't represent the full tradeoff between epsilon and delta. How To Add Noise for Global Differential PrivacyGlobal Differential Privacy adds noise to the output of a query.We'll add noise to the output of our query so that it satisfies a certain epsilon-delta differential privacy threshold.There are two kinds of noise we can add - Gaussian Noise- Laplacian Noise. Generally speaking Laplacian is better, but both are still valid. Now to the hard question... How much noise should we add?The amount of noise necessary to add to the output of a query is a function of four things:- the type of noise (Gaussian/Laplacian)- the sensitivity of the query/function- the desired epsilon (ε)- the desired delta (δ)Thus, for each type of noise we're adding, we have different way of calculating how much to add as a function of sensitivity, epsilon, and delta.Laplacian noise is increased/decreased according to a "scale" parameter b. We choose "b" based on the following formula.`b = sensitivity(query) / epsilon`In other words, if we set b to be this value, then we know that we will have a privacy leakage of <= epsilon. Furthermore, the nice thing about Laplace is that it guarantees this with delta == 0. There are some tunings where we can have very low epsilon where delta is non-zero, but we'll ignore them for now. Querying Repeatedly- if we query the database multiple times - we can simply add the epsilons (Even if we change the amount of noise and their epsilons are not the same). 
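As a concrete instance of the epsilon definition above: classic randomized response with two fair coins satisfies epsilon = ln 3, since the ratio of report probabilities between neighboring inputs (truth = yes vs. truth = no) is 0.75/0.25. This is a standard worked example, not part of the original notebook:

```python
import math

# Randomized response with two fair coins:
#   P(report "yes" | truth = yes) = 0.5*1 + 0.5*0.5 = 0.75
#   P(report "yes" | truth = no)  = 0.5*0 + 0.5*0.5 = 0.25
epsilon = math.log(0.75 / 0.25)
print(epsilon)  # ln(3) ~= 1.0986
```

Here delta is 0: the probability ratio is bounded for every possible output.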
Create a Differentially Private QueryLet's create a query function which sums over the database and adds just the right amount of noise such that it satisfies an epsilon constraint. query will be for "sum" and for "mean". We'll use the correct sensitivity measures for both.
epsilon = 0.001

import numpy as np

db, pdbs = create_db_and_parallels(100)
db

def sum_query(db):
    return db.sum()

def laplacian_mechanism(db, query, sensitivity):
    beta = sensitivity / epsilon
    noise = torch.tensor(np.random.laplace(0, beta, 1))
    return query(db) + noise

# sum has sensitivity 1
laplacian_mechanism(db, sum_query, 1)

def mean_query(db):
    return torch.mean(db.float())

# mean has sensitivity ~ 1/len(db) = 0.01 here
laplacian_mechanism(db, mean_query, 0.01)
_____no_output_____
MIT
differential-privacy/differential_privacy.ipynb
gitgik/pytorch
Differential Privacy for Deep Learning

So what does all of this have to do with Deep Learning? Well, these mechanisms form the core primitives for how Differential Privacy provides guarantees in the context of Deep Learning.

Perfect Privacy

> "a query to a database returns the same value even if we remove any person from the database".

In the context of Deep Learning, we have a similar standard.

> Training a model on a dataset should return the same model even if we remove any person from the dataset.

Thus, we've replaced "querying a database" with "training a model on a dataset". In essence, the training process is a kind of query. However, one should note that this adds two points of complexity which database queries did not have:

1. do we always know where "people" are referenced in the dataset?
2. neural models rarely train to the same output model, even on identical data

The answer to (1) is to treat each training example as a single, separate person. Strictly speaking, this is often overly zealous as some training examples have no relevance to people and others may have multiple/partial (consider an image with multiple people contained within it). Thus, localizing exactly where "people" are referenced, and thus how much your model would change if people were removed, is challenging.

The answer to (2) is also an open problem. To solve this, let's look at PATE.

Scenario: A Health Neural Network

You work for a hospital and you have a large collection of images about your patients. However, you don't know what's in them. You would like to use these images to develop a neural network which can automatically classify them; however, since your images aren't labeled, they aren't sufficient to train a classifier.

However, being a cunning strategist, you realize that you can reach out to 10 partner hospitals which have annotated data. It is your hope to train your new classifier on their datasets so that you can automatically label your own.
While these hospitals are interested in helping, they have privacy concerns regarding information about their patients. Thus, you will use the following technique to train a classifier which protects the privacy of patients in the other hospitals.- 1) You'll ask each of the 10 hospitals to train a model on their own datasets (All of which have the same kinds of labels)- 2) You'll then use each of the 10 partner models to predict on your local dataset, generating 10 labels for each of your datapoints- 3) Then, for each local data point (now with 10 labels), you will perform a DP query to generate the final true label. This query is a "max" function, where "max" is the most frequent label across the 10 labels. We will need to add laplacian noise to make this Differentially Private to a certain epsilon/delta constraint.- 4) Finally, we will retrain a new model on our local dataset which now has labels. This will be our final "DP" model.So, let's walk through these steps. I will assume you're already familiar with how to train/predict a deep neural network, so we'll skip steps 1 and 2 and work with example data. We'll focus instead on step 3, namely how to perform the DP query for each example using toy data.So, let's say we have 10,000 training examples, and we've got 10 labels for each example (from our 10 "teacher models" which were trained directly on private data). Each label is chosen from a set of 10 possible labels (categories) for each image.
import numpy as np

num_teachers = 10     # we're working with 10 partner hospitals
num_examples = 10000  # the size of OUR dataset
num_labels = 10       # number of labels for our classifier

# fake predictions
fake_preds = (np.random.rand(num_teachers, num_examples) * num_labels).astype(int).transpose(1, 0)
fake_preds[:, 0]

# Step 3: Perform a DP query to generate the final true label/output:
# use argmax to find the most frequent label across all 10 teacher labels,
# after adding Laplacian noise to the counts to make the query differentially private.
epsilon = 0.1
beta = 1 / epsilon

new_labels = list()
for an_image in fake_preds:
    # count how often each label was predicted by the teachers
    # (cast to float so the Laplacian noise is not truncated to integers)
    label_counts = np.bincount(an_image, minlength=num_labels).astype(float)

    # add Laplacian noise to each count
    label_counts += np.random.laplace(0, beta, num_labels)

    new_label = np.argmax(label_counts)
    new_labels.append(new_label)

new_labels[:10]
_____no_output_____
MIT
differential-privacy/differential_privacy.ipynb
gitgik/pytorch
PATE Analysis
# lets say the hospitals came up with these outputs... 9, 9, 3, 6 ..., 2 labels = np.array([9, 9, 3, 6, 9, 9, 9, 9, 8, 2]) counts = np.bincount(labels, minlength=10) print(counts) query_result = np.argmax(counts) query_result
[0 0 1 1 0 0 1 0 1 6]
MIT
differential-privacy/differential_privacy.ipynb
gitgik/pytorch
If every hospital says the result is 9, then we have very low sensitivity. We could remove a person from the dataset and the query result would still be 9, so we have not leaked any information.

Core assumption: the same patient was not present at any two of these hospitals. Removing any one of these hospitals then acts as a proxy for removing one person, which means that if we do remove one hospital, the query result should not change.
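To make the sensitivity argument concrete, here is a small made-up simulation reusing the epsilon = 0.1 Laplace noise from earlier. It estimates how often the noisy argmax agrees with the true argmax when the teachers vote unanimously versus when the vote is a near-tie:

```python
import numpy as np

rng = np.random.default_rng(0)
epsilon = 0.1
beta = 1 / epsilon

def noisy_argmax_agreement(counts, trials=1000):
    """Fraction of trials where the Laplace-noised argmax matches the true argmax."""
    counts = np.asarray(counts, dtype=float)
    true_label = np.argmax(counts)
    hits = 0
    for _ in range(trials):
        noisy = counts + rng.laplace(0, beta, size=counts.shape)
        if np.argmax(noisy) == true_label:
            hits += 1
    return hits / trials

# Unanimous vote (low sensitivity): the noisy answer is usually stable.
print(noisy_argmax_agreement([0, 0, 0, 0, 0, 0, 0, 0, 0, 10]))
# Near-tie (high sensitivity): the noisy answer flips much more often.
print(noisy_argmax_agreement([0, 0, 0, 0, 0, 0, 0, 0, 5, 5]))
```

The unanimous case survives the noise far more often, which is exactly why a model trained on such stable answers leaks less about any single hospital.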
from syft.frameworks.torch.differential_privacy import pate

num_teachers, num_examples, num_labels = (100, 100, 10)

# generate fake predictions/labels
preds = (np.random.rand(num_teachers, num_examples) * num_labels).astype(int)
indices = (np.random.rand(num_examples) * num_labels).astype(int)  # true answers

# make the teachers agree on the first 10 examples
preds[:, 0:10] *= 0

# perform PATE analysis to find the data-dependent and data-independent epsilon
data_dep_eps, data_ind_eps = pate.perform_analysis(teacher_preds=preds, indices=indices, noise_eps=0.1, delta=1e-5)
print("Data Independent Epsilon:", data_ind_eps)
print("Data Dependent Epsilon:", data_dep_eps)
assert data_dep_eps < data_ind_eps

# with more teacher agreement, the data-dependent epsilon drops further
preds[:, 0:50] *= 0
data_dep_eps, data_ind_eps = pate.perform_analysis(teacher_preds=preds, indices=indices, noise_eps=0.1, delta=1e-5, moments=20)
print("Data Independent Epsilon:", data_ind_eps)
print("Data Dependent Epsilon:", data_dep_eps)
Data Independent Epsilon: 411.5129254649703 Data Dependent Epsilon: 9.219308825046408
MIT
differential-privacy/differential_privacy.ipynb
gitgik/pytorch
Where to Go From Here

Read:
- Algorithmic Foundations of Differential Privacy: https://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf
- Deep Learning with Differential Privacy: https://arxiv.org/pdf/1607.00133.pdf
- The Ethical Algorithm: https://www.amazon.com/Ethical-Algorithm-Science-Socially-Design/dp/0190948205

Topics:
- The Exponential Mechanism
- The Moments Accountant
- Differentially Private Stochastic Gradient Descent

Advice:
- For deployments, stick with public frameworks!
- Join the Differential Privacy community
- Don't get ahead of yourself: DP is still in its early days

Application of DP in Private Federated Learning

DP works by adding statistical noise either at the input level or the output level of the model, so that you can mask out individual user contributions while still gaining insight into the overall population without sacrificing privacy.

> Case: figure out the average amount of money people have in their pockets.

We could ask someone how much they have in their wallet. They pick a random number between -100 and 100 and add it to the real value: say the true amount is $20 and the picked number is 100, so they report 120. That way, we have no way of knowing the actual amount of money in their wallet. When sufficiently many people submit such results, the noise cancels out in the average, and we start to see the true mean.

Apart from statistical use cases, we can apply DP in private federated learning. Suppose you want to train a model using distributed learning across a number of user devices. One way to do that is to collect all the private data from the devices, but that's not very privacy friendly. Instead, we send the model from the server back to the devices.
The devices will then train the model using their user data, and only send the privatized model updates back to the server. The server then aggregates the updates and makes an informed decision about the overall model. As you run more and more rounds, the model slowly converges toward the true population without private user data ever having to leave the devices. If you increase the level of privacy, the model converges a bit more slowly, and vice versa.

Project:

For the final project for this section, you're going to train a DP model using this PATE method on the MNIST dataset, provided below.
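Before the project, the noise-cancellation claim from the wallet example is easy to simulate. The numbers here (100,000 respondents, wallet amounts drawn uniformly from $0 to $40) are arbitrary assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

n_people = 100_000
true_amounts = rng.uniform(0, 40, size=n_people)   # hypothetical wallet amounts
noise = rng.uniform(-100, 100, size=n_people)      # each person adds private noise
reported = true_amounts + noise                    # what each person actually reveals

print(f"true mean:     {true_amounts.mean():.2f}")
print(f"reported mean: {reported.mean():.2f}")     # the noise cancels out at scale
```

No individual report reveals a wallet amount, yet the population mean is recovered almost exactly.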
import torchvision.datasets as datasets

# train_data/train_labels and test_data/test_labels were renamed in torchvision;
# use .data and .targets instead, and load the test split with train=False
mnist_trainset = datasets.MNIST(root='./data', train=True, download=True, transform=None)
train_data = mnist_trainset.data
train_targets = mnist_trainset.targets

mnist_testset = datasets.MNIST(root='./data', train=False, download=True, transform=None)
test_data = mnist_testset.data
test_targets = mnist_testset.targets
/Users/atrask/anaconda/lib/python3.6/site-packages/torchvision/datasets/mnist.py:58: UserWarning: test_data has been renamed data warnings.warn("test_data has been renamed data") /Users/atrask/anaconda/lib/python3.6/site-packages/torchvision/datasets/mnist.py:48: UserWarning: test_labels has been renamed targets warnings.warn("test_labels has been renamed targets")
MIT
differential-privacy/differential_privacy.ipynb
gitgik/pytorch
0. PATH
import os

os.getcwd()

EXP_PATH = os.getcwd()       # file directory
FILE_NAME = 'entity2id.txt'  # mapping file
_____no_output_____
Apache-2.0
jupyter_notebook/legacy/literal_embeddings.ipynb
rsgit95/med_kg_txt_multimodal
1. EhrKgNode2IdMapping
ehrkg_node2id_mapping = EhrKgNode2IdMapping(exp_path=EXP_PATH, file_name=FILE_NAME, kg_special_token_ids={"PAD":0,"MASK":1}, skip_first_line=True)
_____no_output_____
Apache-2.0
jupyter_notebook/legacy/literal_embeddings.ipynb
rsgit95/med_kg_txt_multimodal
get id2entity: dict
id2entity = ehrkg_node2id_mapping.get_id2entity()
_____no_output_____
Apache-2.0
jupyter_notebook/legacy/literal_embeddings.ipynb
rsgit95/med_kg_txt_multimodal
get id2literal: dict
id2literal = ehrkg_node2id_mapping.get_id2literal()
_____no_output_____
Apache-2.0
jupyter_notebook/legacy/literal_embeddings.ipynb
rsgit95/med_kg_txt_multimodal
2. EhrKgNode2EmbeddingMapping
model_name_or_path = GoogleBERT_MODELCARD[2] print(model_name_or_path) ehrkg_node2embedding_mapping = EhrKgNode2EmbeddingMapping(exp_path=EXP_PATH, file_name=FILE_NAME, kg_special_token_ids={"PAD":0,"MASK":1}, skip_first_line=True, model_name_or_path=model_name_or_path)
_____no_output_____
Apache-2.0
jupyter_notebook/legacy/literal_embeddings.ipynb
rsgit95/med_kg_txt_multimodal
get id2literalembeddings: dict
id2literalembeddings = ehrkg_node2embedding_mapping.get_literal_embeddings_from_model()
0%| | 0/9103 [00:00<?, ?it/s]Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no truncation. 100%|██████████| 9103/9103 [01:26<00:00, 104.68it/s]
Apache-2.0
jupyter_notebook/legacy/literal_embeddings.ipynb
rsgit95/med_kg_txt_multimodal
save id2literalembeddings
SAVE_FILE_DIR = os.getcwd()
ehrkg_node2embedding_mapping.save_literal_embeddings_from_model(save_file_dir=SAVE_FILE_DIR)
0%| | 0/9103 [00:00<?, ?it/s]Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no truncation. 100%|██████████| 9103/9103 [00:38<00:00, 236.54it/s]
Apache-2.0
jupyter_notebook/legacy/literal_embeddings.ipynb
rsgit95/med_kg_txt_multimodal
Lesson 9 Practice: Supervised Machine Learning

Use this notebook to follow along with the lesson in the corresponding lesson notebook: [L09-Supervised_Machine_Learning-Lesson.ipynb](./L09-Supervised_Machine_Learning-Lesson.ipynb).

Instructions

Follow along with the teaching material in the lesson. Throughout the tutorial, sections labeled as "Tasks" are interspersed and indicated with the icon: ![Task](http://icons.iconarchive.com/icons/sbstnblnd/plateau/16/Apps-gnome-info-icon.png). You should follow the instructions provided in these sections by performing them in the practice notebook. When the tutorial is completed you can turn in the final practice notebook. For each task, use the cell below it to write and test your code. You may add additional cells for any task as needed or desired.

Task 1a: Setup

Import the following package sets:
+ packages for data management
+ packages for visualization
+ packages for machine learning

Remember to activate the `%matplotlib inline` magic.
%matplotlib inline # Data Management import numpy as np import pandas as pd # Visualization import seaborn as sns import matplotlib.pyplot as plt # Machine learning from sklearn import model_selection from sklearn import preprocessing from sklearn.metrics import accuracy_score from sklearn.metrics import confusion_matrix from sklearn.metrics import classification_report from sklearn.linear_model import LogisticRegression from sklearn.svm import SVC, LinearSVC from sklearn.ensemble import RandomForestClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.naive_bayes import GaussianNB from sklearn.linear_model import Perceptron from sklearn.linear_model import SGDClassifier from sklearn.tree import DecisionTreeClassifier from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
_____no_output_____
Apache-2.0
.ipynb_checkpoints/L09-Supervised_Machine_Learning-Practice-checkpoint.ipynb
Huiting120/Data-Analytics-With-Python
Task 2a: Data Exploration

After reviewing the data in sections 2.1, 2.2, 2.3 and 2.4, do you see any problems with this iris dataset? If so, please describe them in the practice notebook. If not, simply indicate that there are no issues.

Task 2b: Make Assumptions

After reviewing the data in sections 2.1, 2.2, 2.3 and 2.4, are there any columns that would make poor predictors of species? **Hint**: columns that are poor predictors are:
+ those with too many missing values
+ those with no difference in variation when grouped by the outcome class
+ variables with high levels of collinearity

Task 3a: Practice with the random forest classifier

Now that you have learned how to perform supervised machine learning using a variety of algorithms, let's practice using a new algorithm we haven't looked at yet: the Random Forest Classifier. The random forest classifier builds multiple decision trees and merges them together. Review the sklearn [online documentation for the RandomForestClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html). For this task:
1. Perform a 10-fold cross-validation strategy to see how well the random forest classifier performs with the iris data
2. Use a boxplot to show the distribution of accuracy
3. Use the `fit` and `predict` functions to see how well it performs with the testing data.
4. Plot the confusion matrix
5. Print the classification report.
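As a starting point for the "poor predictor" checks in Task 2b, the sketch below demonstrates all three on the iris data. It uses scikit-learn's bundled copy of the dataset (rather than seaborn's downloaded copy) so it runs offline; the column renaming is an assumption for readability:

```python
import pandas as pd
from sklearn.datasets import load_iris

data = load_iris(as_frame=True)
iris = data.frame.rename(columns={'target': 'species'})

# 1. Missing values: columns with many NaNs make poor predictors.
print(iris.isnull().mean())

# 2. Variation by outcome class: features whose per-class means barely
#    differ carry little signal about the species.
print(iris.groupby('species').mean())

# 3. Collinearity: highly correlated feature pairs are redundant.
print(iris.drop(columns='species').corr())
```

For iris, the petal length/width pair is strongly correlated, which is the kind of collinearity the hint refers to.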
iris = sns.load_dataset('iris')
X = iris.loc[:, 'sepal_length':'petal_width'].values
Y = iris['species'].values
X = preprocessing.robust_scale(X)

Xt, Xv, Yt, Yv = model_selection.train_test_split(X, Y, test_size=0.2, random_state=10)

# shuffle=True is required when a random_state is given
kfold = model_selection.KFold(n_splits=10, shuffle=True, random_state=10)

results = {
    'LogisticRegression': np.zeros(10),
    'LinearDiscriminantAnalysis': np.zeros(10),
    'KNeighborsClassifier': np.zeros(10),
    'DecisionTreeClassifier': np.zeros(10),
    'GaussianNB': np.zeros(10),
    'SVC': np.zeros(10),
    'RandomForestClassifier': np.zeros(10)
}
results

# Create the RandomForestClassifier object with defaults.
alg = RandomForestClassifier()

# Execute the 10-fold cross-validation strategy
results['RandomForestClassifier'] = model_selection.cross_val_score(alg, Xt, Yt, cv=kfold, scoring="accuracy", error_score=np.nan)

# Take a look at the scores for each of the 10-fold runs.
results['RandomForestClassifier']

# Show the distribution of accuracy with a boxplot
pd.DataFrame(results).plot(kind="box", rot=90);

# Create a new RandomForestClassifier and fit it on all of the training data.
alg = RandomForestClassifier()
alg.fit(Xt, Yt)

# Using the testing data, predict the iris species.
predictions = alg.predict(Xv)

# Let's see the predictions
predictions

accuracy_score(Yv, predictions)

# Plot the confusion matrix
labels = ['versicolor', 'virginica', 'setosa']
cm = confusion_matrix(Yv, predictions, labels=labels)
print(cm)

# Print the classification report
print(classification_report(Yv, predictions))
_____no_output_____
Apache-2.0
.ipynb_checkpoints/L09-Supervised_Machine_Learning-Practice-checkpoint.ipynb
Huiting120/Data-Analytics-With-Python
Basics of Jupyter Notebook

The cell type can be changed via the dropdown in the Jupyter editor toolbar.

Use `<br>` to drop the following text to a new line.

To see syntax tips, type a function and press `SHIFT + TAB`.

Run a cell by pressing `CTRL + ENTER`.

Examples of Python Code
my_name = "Elliot" hello_statement = f"Hello, {my_name}" print(hello_statement) x = 1 for i in range(1, 5): x = x + i print(f"i={i}, x={x}")
i=1, x=2 i=2, x=4 i=3, x=7 i=4, x=11
MIT
Jupyter Notebook/Basics of Jupyter Notebook.ipynb
ElliotRedhead/pythonmachinelearning
Much as any computer program can be ultimately reduced to a small set of binary operations on binary inputs (AND, OR, NOR, and so on), all transformations learned by deep neural networks can be reduced to a handful of tensor operations applied to tensors of numeric data. For instance, it’s possible to add tensors, multiply tensors, and so on. A Keras layer instance looks like this
Dense(512, activation='relu')
_____no_output_____
MIT
deep-learning-with-python-book/ch2.math-building-blocks-of-nn/03_tensor_operations.ipynb
plamenti/deep_learning_project_manning
This layer can be interpreted as a function, which takes as input a matrix and returns another matrix — a new representation for the input tensor. Specifically, the function is as follows (where W is a matrix and b is a vector, both attributes of the layer). We have three tensor operations here: a dot product (dot) between the input tensor and a tensor named W; an addition (+) between the resulting matrix and a vector b; and, finally, a relu operation. relu(x) is max(x, 0)
# output = relu(dot(W, input) + b)
_____no_output_____
MIT
deep-learning-with-python-book/ch2.math-building-blocks-of-nn/03_tensor_operations.ipynb
plamenti/deep_learning_project_manning
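A runnable NumPy sketch of that layer function may help. Note that with a batch-first input matrix the product is conventionally written dot(x, W), the transpose of the dot(W, input) form in the comment above; the shapes below are illustrative assumptions (a 512-unit layer on flattened 28x28 inputs):

```python
import numpy as np

rng = np.random.default_rng(0)

W = rng.random((784, 512))   # hypothetical layer weights
b = np.zeros(512)            # hypothetical layer bias
x = rng.random((1, 784))     # one flattened 28x28 input

# output = relu(dot(x, W) + b)
output = np.maximum(np.dot(x, W) + b, 0)
print(output.shape)   # (1, 512)
```

The relu at the end guarantees every entry of the output is non-negative.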
Element-wise operations

The **relu** operation and **addition** are element-wise operations: operations that are applied independently to each entry in the tensors being considered. This means these operations are highly amenable to massively parallel implementations. If you want to write a naive Python implementation of an element-wise operation, you use a for loop, as in this naive implementation of an element-wise **relu** operation:
def naive_relu(x): assert len(x.shape) == 2 x = x.copy() for i in range(x.shape[0]): for j in range(x.shape[1]): x[i, j] = max(x[i, j], 0) return x def naive_add(x, y): assert len(x.shape) == 2 assert x.shape == y.shape x = x.copy() for i in range(x.shape[0]): for j in range(x.shape[1]): x[i, j] += y[i, j] return x
_____no_output_____
MIT
deep-learning-with-python-book/ch2.math-building-blocks-of-nn/03_tensor_operations.ipynb
plamenti/deep_learning_project_manning
On the same principle, you can do element-wise multiplication, subtraction, and so on. In practice, when dealing with NumPy arrays, these operations are available as well-optimized built-in NumPy functions, which themselves delegate the heavy lifting to a Basic Linear Algebra Subprograms (BLAS) implementation if you have one installed. BLAS are low-level, highly parallel, efficient tensor-manipulation routines that are typically implemented in Fortran or C. In NumPy, you can do the following element-wise operation, and it will be blazing fast
# z = x + y # z = np.maximum(z, 0)
_____no_output_____
MIT
deep-learning-with-python-book/ch2.math-building-blocks-of-nn/03_tensor_operations.ipynb
plamenti/deep_learning_project_manning
Time the difference:
import time
import numpy as np

x = np.random.random((20, 100))
y = np.random.random((20, 100))

time_start = time.time()
for _ in range(1000):
    z = x + y
    z = np.maximum(z, 0)
duration = time.time() - time_start
print(f"Duration: {duration} sec")

time_start = time.time()
for _ in range(1000):
    z = naive_add(x, y)
    z = naive_relu(z)
duration = time.time() - time_start
print(f"Duration: {duration} sec")
Duration: 2.2906622886657715 sec
MIT
deep-learning-with-python-book/ch2.math-building-blocks-of-nn/03_tensor_operations.ipynb
plamenti/deep_learning_project_manning
Broadcasting

When possible, and if there's no ambiguity, the smaller tensor will be broadcast to match the shape of the larger tensor. Broadcasting consists of two steps:

1. Axes (called broadcast axes) are added to the smaller tensor to match the ndim of the larger tensor.
2. The smaller tensor is repeated alongside these new axes to match the full shape of the larger tensor.

Example: consider X with shape (32, 10) and y with shape (10,). First, we add an empty first axis to y, whose shape becomes (1, 10). Then, we repeat y 32 times alongside this new axis, so that we end up with a tensor Y with shape (32, 10), where Y[i, :] == y for i in range(0, 32). At this point, we can proceed to add X and Y, because they have the same shape.
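The two broadcasting steps described above can be spelled out explicitly with np.expand_dims and np.repeat; NumPy performs the same logic implicitly (and without actually copying the data):

```python
import numpy as np

X = np.random.random((32, 10))
y = np.random.random((10,))

# Step 1: add a broadcast axis so y's ndim matches X's.
y_expanded = np.expand_dims(y, axis=0)   # shape (1, 10)

# Step 2: repeat y along the new axis to match X's full shape.
Y = np.repeat(y_expanded, 32, axis=0)    # shape (32, 10)

# Implicit broadcasting gives the same result as the explicit version.
assert np.allclose(X + Y, X + y)
```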
def naive_add_matrix_and_vector(x, y): assert len(x.shape) == 2 assert len(y.shape) == 1 assert x.shape[1] == y.shape[0] x = x.copy() for i in range(x.shape[0]): for j in range(x.shape[1]): x[i, j] += y[j] return x x = np.random.random((64, 3, 32, 10)) y = np.random.random((32, 10)) z = np.maximum(x, y)
_____no_output_____
MIT
deep-learning-with-python-book/ch2.math-building-blocks-of-nn/03_tensor_operations.ipynb
plamenti/deep_learning_project_manning
Tensor product

The tensor product, or dot product (not to be confused with the element-wise product, the * operator), is one of the most common, most useful tensor operations. In NumPy, a tensor product is done using the np.dot function (because the mathematical notation for the tensor product is usually a dot).
x = np.random.random((32,)) y = np.random.random((32,)) z = np.dot(x, y) z # naive interpretation of two vectors def naive_vector_dot(x, y): assert len(x.shape) == 1 assert len(y.shape) == 1 assert x.shape[0] == y.shape[0] z = 0. for i in range(x.shape[0]): z += x[i] * y[i] return z zz = naive_vector_dot(x, y) zz # naive interpretation of matrix and vector def naive_matrix_vector_dot(x, y): assert len(x.shape) == 2 assert len(y.shape) == 1 assert x.shape[1] == y.shape[0] z = np.zeros(x.shape[0]) for i in range(x.shape[0]): for j in range(x.shape[1]): z[i] += x[i, j] * y[j] return z
_____no_output_____
MIT
deep-learning-with-python-book/ch2.math-building-blocks-of-nn/03_tensor_operations.ipynb
plamenti/deep_learning_project_manning
As soon as one of the two tensors has an ndim greater than 1, dot is no longer symmetric, which is to say that dot(x, y) isn't the same as dot(y, x). The most common application may be the dot product between two matrices. You can take the dot product of two matrices x and y (dot(x, y)) if and only if x.shape[1] == y.shape[0], i.e. an (m, n) matrix with an (n, p) matrix. The result is a matrix with shape (x.shape[0], y.shape[1]), where the coefficients are the vector products between the rows of x and the columns of y. Here's the naive implementation:
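The shape rule is easy to verify with np.dot before looking at the naive implementation:

```python
import numpy as np

x = np.random.random((3, 4))   # shape (m, n)
y = np.random.random((4, 5))   # shape (n, p)

z = np.dot(x, y)
print(z.shape)   # (3, 5) == (x.shape[0], y.shape[1])

# each coefficient is the vector product of a row of x and a column of y
print(z[0, 0], np.dot(x[0, :], y[:, 0]))
```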
def naive_matrix_dot(x, y): assert len(x.shape) == 2 assert len(y.shape) == 2 assert x.shape[1] == y.shape[0] z = np.zeros((x.shape[0], y.shape[1])) for i in range(x.shape[0]): for j in range(y.shape[1]): row_x = x[i, :] column_y = y[:, j] z[i, j] = naive_vector_dot(row_x, column_y) return z
_____no_output_____
MIT
deep-learning-with-python-book/ch2.math-building-blocks-of-nn/03_tensor_operations.ipynb
plamenti/deep_learning_project_manning
Tensor reshaping

Reshaping a tensor means rearranging its rows and columns to match a target shape. Naturally, the reshaped tensor has the same total number of coefficients as the initial tensor. Reshaping is best understood via simple examples:
x = np.array([[0., 1.], [2., 3.], [4., 5.]]) print(x.shape) x = x.reshape((6, 1)) x x = x.reshape((2, 3)) x
_____no_output_____
MIT
deep-learning-with-python-book/ch2.math-building-blocks-of-nn/03_tensor_operations.ipynb
plamenti/deep_learning_project_manning
A special case of reshaping that’s commonly encountered is transposition. Transposing a matrix means exchanging its rows and its columns, so that x[i, :] becomes x[:, i]:
x = np.zeros((300, 20)) print(x.shape) x = np.transpose(x) print(x.shape)
(20, 300)
MIT
deep-learning-with-python-book/ch2.math-building-blocks-of-nn/03_tensor_operations.ipynb
plamenti/deep_learning_project_manning
Geometric interpretation of tensor operations

Because the contents of the tensors manipulated by tensor operations can be interpreted as coordinates of points in some geometric space, all tensor operations have a geometric interpretation. For instance, addition geometrically amounts to translating a point along a vector.

The engine of neural networks: gradient-based optimization
- Derivative of a tensor operation: the gradient
- Stochastic gradient descent
- Chaining derivatives: the Backpropagation algorithm
- The chain rule

The GradientTape in TensorFlow: the API through which you can leverage TensorFlow's powerful automatic differentiation capabilities is the GradientTape.
import tensorflow as tf
from tensorflow.keras import models, layers
from tensorflow.keras.datasets import mnist

x = tf.Variable(0.)
with tf.GradientTape() as tape:
    y = 2 * x + 3
grad_of_y_wrt_x = tape.gradient(y, x)
grad_of_y_wrt_x

W = tf.Variable(tf.random.uniform((2, 2)))
b = tf.Variable(tf.zeros((2,)))
x = tf.random.uniform((2, 2))
with tf.GradientTape() as tape:
    y = tf.matmul(x, W) + b
grad_of_y_wrt_W_and_b = tape.gradient(y, [W, b])
grad_of_y_wrt_W_and_b

(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = train_images.reshape((60000, 28 * 28))
train_images = train_images.astype('float32') / 255
test_images = test_images.reshape((10000, 28 * 28))
test_images = test_images.astype('float32') / 255

model = models.Sequential([
    layers.Dense(512, activation='relu'),
    layers.Dense(10, activation='softmax')
])
model.compile(optimizer="rmsprop", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_images, train_labels, epochs=5, batch_size=128)
Epoch 1/5 469/469 [==============================] - 4s 8ms/step - loss: 0.2586 - accuracy: 0.9244 Epoch 2/5 469/469 [==============================] - 4s 8ms/step - loss: 0.1050 - accuracy: 0.9682 Epoch 3/5 469/469 [==============================] - 4s 8ms/step - loss: 0.0687 - accuracy: 0.9792 Epoch 4/5 469/469 [==============================] - 4s 8ms/step - loss: 0.0490 - accuracy: 0.9851 Epoch 5/5 469/469 [==============================] - 4s 8ms/step - loss: 0.0369 - accuracy: 0.9891
MIT
deep-learning-with-python-book/ch2.math-building-blocks-of-nn/03_tensor_operations.ipynb
plamenti/deep_learning_project_manning
Implementing from scratch in TensorFlow

Let's implement a simple Python class NaiveDense that creates two TensorFlow variables W and b, and exposes a `__call__` method that applies the above transformation.
class NaiveDense:
    def __init__(self, input_size, output_size, activation):
        self.activation = activation

        # create a matrix W of shape (input_size, output_size), initialized with random values
        w_shape = (input_size, output_size)
        w_initial_value = tf.random.uniform(w_shape, minval=0, maxval=1e-1)
        self.W = tf.Variable(w_initial_value)

        # create a vector b of shape (output_size,), initialized with zeros
        b_shape = (output_size,)
        b_initial_value = tf.zeros(b_shape)
        self.b = tf.Variable(b_initial_value)

    def __call__(self, inputs):
        # apply the forward pass
        return self.activation(tf.matmul(inputs, self.W) + self.b)

    @property
    def weights(self):
        # convenience property for retrieving the layer's weights
        return [self.W, self.b]
_____no_output_____
MIT
deep-learning-with-python-book/ch2.math-building-blocks-of-nn/03_tensor_operations.ipynb
plamenti/deep_learning_project_manning
A simple Sequential class

Create a NaiveSequential class to chain these layers. It wraps a list of layers, and exposes a `__call__` method that simply calls the underlying layers on the inputs, in order. It also features a weights property to easily keep track of the layers' parameters.
class NaiveSequential: def __init__(self, layers): self.layers = layers def __call__(self, inputs): x = inputs for layer in self.layers: x = layer(x) return x @property def weights(self): weights = [] for layer in self.layers: weights += layer.weights return weights
_____no_output_____
MIT
deep-learning-with-python-book/ch2.math-building-blocks-of-nn/03_tensor_operations.ipynb
plamenti/deep_learning_project_manning
Using this NaiveDense class and this NaiveSequential class, we can create a mock Keras model:
model = NaiveSequential([ NaiveDense(input_size=28 * 28, output_size=512, activation=tf.nn.relu), NaiveDense(input_size=512, output_size=10, activation=tf.nn.softmax) ]) assert len(model.weights) == 4
_____no_output_____
MIT
deep-learning-with-python-book/ch2.math-building-blocks-of-nn/03_tensor_operations.ipynb
plamenti/deep_learning_project_manning
A batch generator

Next, we need a way to iterate over the MNIST data in mini-batches. This is easy:
class BatchGenerator: def __init__(self, images, labels, batch_size=128): self.index = 0 self.images = images self.labels = labels self.batch_size = batch_size def next(self): images = self.images[self.index : self.index + self.batch_size] labels = self.labels[self.index : self.index + self.batch_size] self.index += self.batch_size return images, labels
_____no_output_____
MIT
deep-learning-with-python-book/ch2.math-building-blocks-of-nn/03_tensor_operations.ipynb
plamenti/deep_learning_project_manning
Running one training step

The most difficult part of the process is the "training step": updating the weights of the model after running it on one batch of data. We need to:

1. Compute the predictions of the model for the images in the batch
2. Compute the loss value for these predictions given the actual labels
3. Compute the gradient of the loss with regard to the model's weights
4. Move the weights by a small amount in the direction opposite to the gradient

To compute the gradient, we will use the TensorFlow GradientTape object.
learning_rate = 1e-3

def update_weights(gradients, weights):
    for g, w in zip(gradients, weights):
        # move each weight against its gradient;
        # assign_sub is the equivalent of -= for TensorFlow variables
        w.assign_sub(g * learning_rate)

def one_training_step(model, images_batch, labels_batch):
    # run the "forward pass" (compute the model's predictions under the GradientTape scope)
    with tf.GradientTape() as tape:
        predictions = model(images_batch)
        per_sample_losses = tf.keras.losses.sparse_categorical_crossentropy(labels_batch, predictions)
        average_loss = tf.reduce_mean(per_sample_losses)
    # compute the gradient of the loss with regard to the weights; the result is a list
    # where each entry corresponds to a weight from the model.weights list
    gradients = tape.gradient(average_loss, model.weights)
    update_weights(gradients, model.weights)  # update the weights using the gradients
    return average_loss
_____no_output_____
MIT
deep-learning-with-python-book/ch2.math-building-blocks-of-nn/03_tensor_operations.ipynb
plamenti/deep_learning_project_manning
In practice, you will almost never implement a weight update step like this by hand. Instead, you would use an Optimizer instance from Keras, like this:
from tensorflow.keras import optimizers

optimizer = optimizers.SGD(learning_rate=1e-3)

def update_weights(gradients, weights):
    optimizer.apply_gradients(zip(gradients, weights))
_____no_output_____
MIT
deep-learning-with-python-book/ch2.math-building-blocks-of-nn/03_tensor_operations.ipynb
plamenti/deep_learning_project_manning
The full training loop

An epoch of training simply consists of the repetition of the training step for each batch in the training data, and the full training loop is simply the repetition of one epoch:
def fit(model, images, labels, epochs, batch_size=128): for epoch_counter in range(epochs): print('Epoch %d' % epoch_counter) batch_generator = BatchGenerator(images, labels) for batch_counter in range(len(images) // batch_size): images_batch, labels_batch = batch_generator.next() loss = one_training_step(model, images_batch, labels_batch) if batch_counter % 100 == 0: print('loss at batch %d: %.2f' % (batch_counter, loss)) from tensorflow.keras.datasets import mnist (train_images, train_labels), (test_images, test_labels) = mnist.load_data() train_images = train_images.reshape((60000, 28 * 28)) train_images = train_images.astype('float32') / 255 test_images = test_images.reshape((10000, 28 * 28)) test_images = test_images.astype('float32') / 255 fit(model, train_images, train_labels, epochs=10, batch_size=128)
Epoch 0 loss at batch 0: 6.83 loss at batch 100: 2.24 loss at batch 200: 2.21 loss at batch 300: 2.12 loss at batch 400: 2.22 Epoch 1 loss at batch 0: 1.93 loss at batch 100: 1.89 loss at batch 200: 1.84 loss at batch 300: 1.75 loss at batch 400: 1.84 Epoch 2 loss at batch 0: 1.61 loss at batch 100: 1.59 loss at batch 200: 1.52 loss at batch 300: 1.46 loss at batch 400: 1.53 Epoch 3 loss at batch 0: 1.33 loss at batch 100: 1.35 loss at batch 200: 1.26 loss at batch 300: 1.24 loss at batch 400: 1.29 Epoch 4 loss at batch 0: 1.12 loss at batch 100: 1.16 loss at batch 200: 1.05 loss at batch 300: 1.07 loss at batch 400: 1.12 Epoch 5 loss at batch 0: 0.97 loss at batch 100: 1.02 loss at batch 200: 0.90 loss at batch 300: 0.95 loss at batch 400: 1.00 Epoch 6 loss at batch 0: 0.86 loss at batch 100: 0.91 loss at batch 200: 0.80 loss at batch 300: 0.85 loss at batch 400: 0.91 Epoch 7 loss at batch 0: 0.77 loss at batch 100: 0.83 loss at batch 200: 0.72 loss at batch 300: 0.78 loss at batch 400: 0.84 Epoch 8 loss at batch 0: 0.71 loss at batch 100: 0.76 loss at batch 200: 0.65 loss at batch 300: 0.72 loss at batch 400: 0.79 Epoch 9 loss at batch 0: 0.66 loss at batch 100: 0.70 loss at batch 200: 0.60 loss at batch 300: 0.67 loss at batch 400: 0.75
MIT
deep-learning-with-python-book/ch2.math-building-blocks-of-nn/03_tensor_operations.ipynb
plamenti/deep_learning_project_manning
Evaluating the model

We can evaluate the model by taking the argmax of its predictions over the test images and comparing it to the expected labels:
predictions = model(test_images)
predictions = predictions.numpy()  # calling .numpy() converts a TensorFlow tensor to a NumPy array
predicted_labels = np.argmax(predictions, axis=1)
matches = predicted_labels == test_labels
print(f"Accuracy: {np.average(matches)}")
Accuracy: 0.8318
MIT
deep-learning-with-python-book/ch2.math-building-blocks-of-nn/03_tensor_operations.ipynb
plamenti/deep_learning_project_manning
Feature groups:

- **Categorical Ordinal:**
    - TP_ (17-4-3 = 10)
    - Questions: ```["Q001", "Q002", "Q003", "Q004", "Q005", "Q006", "Q007", "Q008", "Q009", "Q010", "Q011", "Q012", "Q013", "Q014", "Q015", "Q016", "Q017", "Q019", "Q022", "Q024"]``` (20)
- **Categorical Nominal:**
    - IN_: all binary (52)
    - TP_: ```["TP_SEXO", "TP_ESTADO_CIVIL", "TP_COR_RACA", "TP_NACIONALIDADE"]``` (4)
    - SG_: (4-1 = 3)
    - Questions: ```["Q018", "Q020", "Q021", "Q023", "Q025"]``` (5)
- **Numerical:**
    - NU_IDADE (1)
- Dropped:
    - Identifier: ```[NU_INSCRICAO]``` (1)
    - More than 40% missing: ```['CO_ESCOLA', 'NO_MUNICIPIO_ESC', 'SG_UF_ESC', 'TP_DEPENDENCIA_ADM_ESC', 'TP_LOCALIZACAO_ESC', 'TP_SIT_FUNC_ESC']``` (6)
    - NO_M (too many categories): ```['NO_MUNICIPIO_RESIDENCIA', 'NO_MUNICIPIO_NASCIMENTO', 'NO_MUNICIPIO_PROVA']``` (3)
    - NU_NOTA: target variables (5)
train_df = pd.read_parquet("data/train.parquet")
clean_target(train_df)
#test = pd.read_parquet("data/test.parquet")

categorical_ordinal_columns = get_categorical_ordinal_columns(train_df)
qtd_categorical_ordinal_columns = len(categorical_ordinal_columns)
print(f"Number of categorical ordinal features: {qtd_categorical_ordinal_columns}")

categorical_nominal_columns = get_categorical_nominal_columns(train_df)
qtd_categorical_nominal_columns = len(categorical_nominal_columns)
print(f"Number of categorical nominal features: {qtd_categorical_nominal_columns}")

drop_columns = ["NU_INSCRICAO", "CO_ESCOLA", "NO_MUNICIPIO_ESC", "SG_UF_ESC",
                "TP_DEPENDENCIA_ADM_ESC", "TP_LOCALIZACAO_ESC", "TP_SIT_FUNC_ESC",
                "NO_MUNICIPIO_RESIDENCIA", "NO_MUNICIPIO_NASCIMENTO", "NO_MUNICIPIO_PROVA"]
qtd_drop_columns = len(drop_columns)
print(f"Number of columns dropped: {qtd_drop_columns}")

numerical_columns = ["NU_IDADE"]
qtd_numerical_columns = len(numerical_columns)
print(f"Number of numerical features: {qtd_numerical_columns}")

target_columns = train_df.filter(regex="NU_NOTA").columns.tolist()
qtd_target_columns = len(target_columns)
print(f"Number of targets: {qtd_target_columns}")

all_columns = drop_columns + categorical_nominal_columns + categorical_ordinal_columns + numerical_columns + target_columns
qtd_total = qtd_drop_columns + qtd_categorical_nominal_columns + qtd_categorical_ordinal_columns + qtd_numerical_columns + qtd_target_columns
print(f"Total columns: {qtd_total}")
Total columns: 110
MIT
pipeline_2.ipynb
rocabrera/kaggles_enem
**Create Pipeline**
""" Variáveis categóricas com dados ordinais que tem dados faltantes: - TP_ENSINO: Suposto que NaN representa a categoria faltante descrita nos metadados. - TP_STATUS_REDACAO: Mapeado para outra classe (Faltou na prova) """ categorical_ordinal_pipe = Pipeline([ ('selector', ColumnSelector(categorical_ordinal_columns)), ('imputer', SimpleImputer(missing_values=np.nan, strategy='constant', fill_value=0)), ('encoder', OrdinalEncoder(handle_unknown='use_encoded_value', unknown_value=-1)) ]) """ Variáveis categóricas com dados ordinais que tem dados faltantes: - SG_UF_NASCIMENTO: Mapeado para uma nova categoria """ categorical_nominal_pipe = Pipeline([ ('selector', ColumnSelector(categorical_nominal_columns)), ('imputer', SimpleImputer(missing_values=np.nan, strategy='constant', fill_value="missing")), ('encoder', OneHotEncoder(drop="first", handle_unknown='ignore')) ]) numerical_pipe = Pipeline([ ('selector', ColumnSelector(numerical_columns)), ('imputer', SimpleImputer(missing_values=np.nan, strategy='constant', fill_value=0)), ('scaler', MinMaxScaler()) ]) preprocessor = FeatureUnion([ ('categorical_ordinal', categorical_ordinal_pipe), ('categorical_nominal', categorical_nominal_pipe), ('numerical', numerical_pipe) ]) kwargs_regressor = {"n_estimators":50, "n_jobs":-1, "verbose":2} pipe = Pipeline([ ('preprocessor', preprocessor), ('feature_selection', VarianceThreshold(threshold=0.05)), ('model', RandomForestRegressor(**kwargs_regressor)) ]) n_samples = 1000 X = train_df.sample(n_samples).drop(columns=target_columns+drop_columns) y = train_df.sample(n_samples).filter(regex="NU_NOTA") X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=42) def split_target(y): y_nu_nota_cn = y["NU_NOTA_CN"] y_nu_nota_ch = y["NU_NOTA_CH"] y_nu_nota_lc = y["NU_NOTA_LC"] y_nu_nota_mt = y["NU_NOTA_MT"] y_nu_nota_redacao = y["NU_NOTA_REDACAO"] return (y_nu_nota_cn, y_nu_nota_ch, y_nu_nota_lc, y_nu_nota_mt, y_nu_nota_redacao) y_train_cn, y_train_ch, 
y_train_lc, y_train_mt, y_train_redacao = split_target(y_train) y_test_cn, y_test_ch, y_test_lc, y_test_mt, y_test_redacao = split_target(y_test) y_structure = {"NU_NOTA_CN":[y_train_cn, y_test_cn], "NU_NOTA_CH":[y_train_ch, y_test_ch], "NU_NOTA_LC":[y_train_lc, y_test_lc], "NU_NOTA_MT":[y_train_mt, y_test_mt], "NU_NOTA_REDACAO":[y_train_redacao, y_test_redacao]} from joblib import dump for key, ys in tqdm(y_structure.items()): pipe.fit(X_train, ys[0]) dump(pipe, f"models/model_{key}.joblib") y_train_hat = pipe.predict(X_train) ys.append(y_train_hat) y_test_hat = pipe.predict(X_test) ys.append(y_test_hat) for key, ys in tqdm(y_structure.items()): train_error = mean_squared_error(ys[0], ys[2], squared=False) test_error = mean_squared_error(ys[1], ys[3], squared=False) print(key) print(f"Train: {train_error}") print(f"Test: {test_error}\n")
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 1737.78it/s]
MIT
pipeline_2.ipynb
rocabrera/kaggles_enem
Defining a custom early-stopping callback class for use with the Keras `model.fit` method
class CustomStopper(keras.callbacks.EarlyStopping):
    def __init__(self, monitor='val_loss', min_delta=0, patience=10, verbose=0, mode='auto', start_epoch=30):
        # forward the standard arguments to EarlyStopping (they were previously dropped);
        # start_epoch delays when monitoring begins
        super(CustomStopper, self).__init__(monitor=monitor, min_delta=min_delta,
                                            patience=patience, verbose=verbose, mode=mode)
        self.start_epoch = start_epoch

    def on_epoch_end(self, epoch, logs=None):
        if epoch > self.start_epoch:
            super().on_epoch_end(epoch, logs)
_____no_output_____
Apache-2.0
final_project_files/code/NLP_Project_Neural_Network_Model.ipynb
PrudhviVajja/Mutation_Prediction
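The stopping rule the callback implements can be sketched without Keras at all (an illustrative toy, not the library's code): track the best loss seen so far and stop once it fails to improve for `patience` consecutive epochs, ignoring everything before `start_epoch`.

```python
def early_stop_epoch(losses, patience=3, min_delta=0.0, start_epoch=0):
    """Return the epoch at which training would stop, or None if never."""
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(losses):
        if epoch <= start_epoch:          # monitoring has not started yet
            continue
        if loss < best - min_delta:       # a real improvement resets the counter
            best = loss
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return None

print(early_stop_epoch([1.0, 0.9, 0.8, 0.8, 0.8, 0.8], patience=3))  # stops at epoch 5
```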
Defining variables to be passed to the various methods
filename = 'Data_v1.xlsx'  # file name of the uploaded dataset file
modelName = 'Model1'       # name of the model, used to save the model evaluation and history
numEpochs = 150            # maximum number of epochs if early stopping doesn't trigger
batchsize = 50             # batch size used in each step by the optimizer defined in the model
optimizer = 'adadelta'     # optimizer to be used in the keras model.fit method
_____no_output_____
Apache-2.0
final_project_files/code/NLP_Project_Neural_Network_Model.ipynb
PrudhviVajja/Mutation_Prediction
Method to read the uploaded file. Returns the text input samples and the target label for each. Transforms X into a per-term weight vector for every sample (TF-IDF here; a count-based variant is commented out in the code)
def mypreprocessor(text):
    porter_stemmer = PorterStemmer()
    words = re.split(r"\s+", text)
    stemmed_words = [porter_stemmer.stem(word=word) for word in words]
    return ' '.join(stemmed_words)

def Preprocessing():
    X = pd.read_excel(list(uploaded.items())[0][0], usecols="H")  # usecols is the column containing all the training samples
    y = pd.read_excel(list(uploaded.items())[0][0], usecols="F")  # usecols is the column containing all the target labels
    X = [str(i) for i in X.extracted_text.to_list()]  # the attribute used on X should match the column name in the excel file
    # for i in range(len(X)):
    #     X[i] = re.sub(r'(\s\d+\s)|{\d+}|\(\d+\)', '', X[i])
    #     X[i] = re.sub(r'gain-of-function|gain of function|toxic gain of function|activating mutation|constitutively active|hypermorph|ectopic expression|neomorph|gain of interaction|function protein|fusion transcript', 'GOF', X[i])
    #     X[i] = re.sub(r'haploinsufficiency|haploinsufficient|hypomorph|amorph|null mutation|hemizygous', 'HI', X[i])
    #     X[i] = re.sub(r'dominant-negative|dominant negative|antimorph', 'DN', X[i])
    #     X[i] = re.sub(r'loss of function|loss-of-function', 'LOF', X[i])
    # X = preprocess_data(X)
    y = y.mutation_consequence.to_list()
    # vocabulary = ['gain-of-function', 'gain of function',
    #               'toxic gain of function', 'activating mutation',
    #               'constitutively active', 'hypermorph', 'ectopic expression',
    #               'neomorph', 'gain of interaction', 'function protein', 'fusion transcript',
    #               'haploinsufficiency', 'haploinsufficient', 'hypomorph', 'amorph',
    #               'null mutation', 'hemizygous', 'dominant-negative', 'dominant negative', 'antimorph',
    #               'loss of function', 'loss-of-function']
    X = TfidfVectorizer(preprocessor=mypreprocessor, max_df=200, ngram_range=(1, 2)).fit_transform(X)
    # X = CountVectorizer(preprocessor=mypreprocessor, max_df=200, ngram_range=(1, 2)).fit_transform(X)
    return X, y
_____no_output_____
Apache-2.0
final_project_files/code/NLP_Project_Neural_Network_Model.ipynb
PrudhviVajja/Mutation_Prediction
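For intuition about the vectorizer used above, here is a toy sketch of a tf-idf weight (deliberately simplified: raw count times a common idf variant, without scikit-learn's smoothing or normalization):

```python
import math

# Two toy "documents", already tokenized.
docs = [["gain", "of", "function"], ["loss", "of", "function"]]

def tfidf(term, doc, all_docs):
    tf = doc.count(term)                    # raw term frequency in this document
    df = sum(term in d for d in all_docs)   # number of documents containing the term
    idf = math.log(len(all_docs) / df) + 1  # a common idf variant
    return tf * idf

print(tfidf("gain", docs[0], docs))  # rare term: weighted up
print(tfidf("of", docs[0], docs))    # term in every doc: idf adds nothing extra
```

Terms that appear in every document get idf = 1, so only their raw frequency matters, while rarer terms are boosted.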
Method to split the dataset into training and testing sets. Converts y to a one-hot encoded vector; e.g. if the target class index is 3 out of 5 classes, it returns [0, 0, 0, 1, 0]
def TrainTestSplit(X, y): X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.20, random_state=100, stratify = y) #split the dataset, test_Size variable defines the size of the test dataset, stratify column makes sure even distribution of target labels X_train = X_train.toarray() #changing to numpy array to work with keras sequential model X_test = X_test.toarray() #changing to numpy array to work with keras sequential model le = LabelEncoder() y_train = to_categorical(le.fit(y_train).transform(y_train)) y_test = to_categorical(le.fit(y_test).transform(y_test)) return X_train, X_test, y_train, y_test, le.classes_ # returns training and test datasets, as well as class names
_____no_output_____
Apache-2.0
final_project_files/code/NLP_Project_Neural_Network_Model.ipynb
PrudhviVajja/Mutation_Prediction
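To make the one-hot mapping concrete, here is a dependency-free sketch of what `LabelEncoder` followed by `to_categorical` produce together (a simplified stand-in, not the Keras implementation; the example labels reuse the GOF/HI/DN/LOF classes mentioned in the code comments):

```python
def to_one_hot(labels):
    """Encode labels as one-hot rows; e.g. class index 3 of 5 -> [0, 0, 0, 1, 0]."""
    classes = sorted(set(labels))                  # LabelEncoder sorts the classes
    index = {c: i for i, c in enumerate(classes)}  # label -> column position
    return [[1.0 if index[lab] == col else 0.0 for col in range(len(classes))]
            for lab in labels]

print(to_one_hot(["DN", "GOF", "HI", "LOF"]))  # each row marks one class
```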
Defining the model to be used for training the datasets.
def ModelBuild(X, y):
    inputs = keras.layers.Input(shape=(len(X[0]),))
    dense1 = keras.layers.Dense(200, activation="relu")(inputs)  # fully connected with input vectors
    # dropout = keras.layers.Dropout(0.2)(dense1)  # regularization layer if required
    dense2 = keras.layers.Dense(50, activation="relu")(dense1)  # fully connected with layer 1
    # dropout2 = keras.layers.Dropout(0.1)(dense2)  # regularization layer if required
    # dense3 = keras.layers.Dense(50, activation="relu")(dense2)
    outputs = keras.layers.Dense(len(y[0]), activation="sigmoid")(dense2)  # output layer (softmax is more usual with categorical_crossentropy)
    model = keras.Model(inputs=inputs, outputs=outputs)
    return model
_____no_output_____
Apache-2.0
final_project_files/code/NLP_Project_Neural_Network_Model.ipynb
PrudhviVajja/Mutation_Prediction
Method to show summary of the model as well as the shape in diagram form
def PlotModel(model, filename): model.summary() keras.utils.plot_model(model, filename, show_shapes=True)
_____no_output_____
Apache-2.0
final_project_files/code/NLP_Project_Neural_Network_Model.ipynb
PrudhviVajja/Mutation_Prediction
Method to compile the defined model as well as run the training. Returns a history variable which can be used to plot training and validation loss as well as accuracy at every epoch
def PlotTraining(model, X_test, y_test): model.compile(loss='categorical_crossentropy',optimizer=optimizer,metrics=[keras.metrics.CategoricalAccuracy(),'accuracy']) # EarlyStoppage = CustomStopper() es = keras.callbacks.EarlyStopping(monitor='val_accuracy', baseline=0.7, patience=30) history = model.fit(X_train, y_train,validation_split=0.2,epochs=numEpochs, batch_size=batchsize) #,callbacks = [es] ) - use this for early stopping model.evaluate(X_test, y_test) return history
_____no_output_____
Apache-2.0
final_project_files/code/NLP_Project_Neural_Network_Model.ipynb
PrudhviVajja/Mutation_Prediction
Plots the validation and training accuracy at every epoch using a history object obtained by model.fit in the previous step
def plot(history): # list all data in history print(history.keys()) # summarize history for accuracy plt.plot(history['accuracy']) plt.plot(history['val_accuracy']) plt.title('model accuracy') plt.ylabel('accuracy') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() # summarize history for loss plt.plot(history['loss']) plt.plot(history['val_loss']) plt.title('model loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.show()
_____no_output_____
Apache-2.0
final_project_files/code/NLP_Project_Neural_Network_Model.ipynb
PrudhviVajja/Mutation_Prediction
Calling the methods to run all the required steps in the pipeline
X, y = Preprocessing() X_train, X_test, y_train, y_test, ClassNames = TrainTestSplit(X, y) # svd = sklearn.decomposition.TruncatedSVD(n_components=60, n_iter=5, random_state=42) # X_train = svd.fit(X_train).transform(X_train) # svd = sklearn.decomposition.TruncatedSVD(n_components=60, n_iter=5, random_state=42) # X_test = svd.fit(X_test).transform(X_test) model = ModelBuild(X_train, y_train) PlotModel(model, modelName +".png") history = PlotTraining(model, X_test, y_test) print(confusion_matrix(y_test.argmax(axis=-1),model.predict(X_test).argmax(axis=-1))) print(classification_report(y_test.argmax(axis=-1), model.predict(X_test).argmax(axis=-1),target_names=ClassNames)) with open('/content/%s' %modelName, 'wb') as file_pi: pickle.dump(history.history, file_pi) history = pickle.load(open('/content/%s' % modelName, "rb")) model.save(modelName +'.h5') plot(history) with open('/content/%s_train' %modelName, 'wb') as file_pi: pickle.dump(X_train, file_pi) with open('/content/%s_test' %modelName, 'wb') as file_pi: pickle.dump(X_test, file_pi) with open('/content/%s_Labeltest' %modelName, 'wb') as file_pi: pickle.dump(y_test, file_pi) with open('/content/%s_LabelTrain' %modelName, 'wb') as file_pi: pickle.dump(y_train, file_pi) with open('/content/%s_ConfusionMatrix' %modelName, 'wb') as file_pi: pickle.dump(confusion_matrix(y_test.argmax(axis=-1), model.predict(X_test).argmax(axis=-1)), file_pi) with open('/content/%s_ClassificationReport' %modelName, 'wb') as file_pi: pickle.dump(classification_report(y_test.argmax(axis=-1), model.predict(X_test).argmax(axis=-1), target_names=ClassNames), file_pi)
_____no_output_____
Apache-2.0
final_project_files/code/NLP_Project_Neural_Network_Model.ipynb
PrudhviVajja/Mutation_Prediction
Running the above solution with reduced text and preprocessing
uploaded = files.upload() modelName = 'Model2' #name of the model, this will be used to save model evaluation and history numEpochs = 150 # maximum number of epochs if early stopping doesnt work batchsize = 50 # batchsize which will be used in each step by optimizer defined in the model optimizer = 'adam' #optimizer to be used in model.fit keras method
_____no_output_____
Apache-2.0
final_project_files/code/NLP_Project_Neural_Network_Model.ipynb
PrudhviVajja/Mutation_Prediction
Calling the above pipeline again with new parameters
X, y = Preprocessing() X_train, X_test, y_train, y_test, ClassNames = TrainTestSplit(X, y) model = ModelBuild(X_train, y_train) PlotModel(model, modelName +".png") history = PlotTraining(model, X_test, y_test) print(confusion_matrix(y_test.argmax(axis=-1),model.predict(X_test).argmax(axis=-1))) print(classification_report(y_test.argmax(axis=-1), model.predict(X_test).argmax(axis=-1),target_names=ClassNames)) with open('/content/%s' %modelName, 'wb') as file_pi: pickle.dump(history.history, file_pi) history = pickle.load(open('/content/%s' % modelName, "rb")) model.save(modelName +'.h5') plot(history) with open('/content/%s_train' %modelName, 'wb') as file_pi: pickle.dump(X_train, file_pi) with open('/content/%s_test' %modelName, 'wb') as file_pi: pickle.dump(X_test, file_pi) with open('/content/%s_Labeltest' %modelName, 'wb') as file_pi: pickle.dump(y_test, file_pi) with open('/content/%s_LabelTrain' %modelName, 'wb') as file_pi: pickle.dump(y_train, file_pi) with open('/content/%s_ConfusionMatrix' %modelName, 'wb') as file_pi: pickle.dump(confusion_matrix(y_test.argmax(axis=-1), model.predict(X_test).argmax(axis=-1)), file_pi) with open('/content/%s_ClassificationReport' %modelName, 'wb') as file_pi: pickle.dump(classification_report(y_test.argmax(axis=-1), model.predict(X_test).argmax(axis=-1), target_names=ClassNames), file_pi)
_____no_output_____
Apache-2.0
final_project_files/code/NLP_Project_Neural_Network_Model.ipynb
PrudhviVajja/Mutation_Prediction
Running the model with TF-IDF vectorizer instead of CountVectorizer (adam optimizer)
uploaded = files.upload() modelName = 'Model3' #name of the model, this will be used to save model evaluation and history numEpochs = 150 # maximum number of epochs if early stopping doesnt work batchsize = 20 # batchsize which will be used in each step by optimizer defined in the model optimizer = 'adam' #optimizer to be used in model.fit keras method from sklearn.feature_extraction.text import TfidfVectorizer
_____no_output_____
Apache-2.0
final_project_files/code/NLP_Project_Neural_Network_Model.ipynb
PrudhviVajja/Mutation_Prediction
For the next step, go to Preprocessing method and change CountVectorizer to TfIdfVectorizer
X, y = Preprocessing() X_train, X_test, y_train, y_test, ClassNames = TrainTestSplit(X, y) model = ModelBuild(X_train, y_train) PlotModel(model, modelName +".png") history = PlotTraining(model, X_test, y_test) print(confusion_matrix(y_test.argmax(axis=-1),model.predict(X_test).argmax(axis=-1))) print(classification_report(y_test.argmax(axis=-1), model.predict(X_test).argmax(axis=-1),target_names=ClassNames)) with open('/content/%s' %modelName, 'wb') as file_pi: pickle.dump(history.history, file_pi) history = pickle.load(open('/content/%s' % modelName, "rb")) model.save(modelName +'.h5') plot(history) with open('/content/%s_train' %modelName, 'wb') as file_pi: pickle.dump(X_train, file_pi) with open('/content/%s_test' %modelName, 'wb') as file_pi: pickle.dump(X_test, file_pi) with open('/content/%s_Labeltest' %modelName, 'wb') as file_pi: pickle.dump(y_test, file_pi) with open('/content/%s_LabelTrain' %modelName, 'wb') as file_pi: pickle.dump(y_train, file_pi) with open('/content/%s_ConfusionMatrix' %modelName, 'wb') as file_pi: pickle.dump(confusion_matrix(y_test.argmax(axis=-1), model.predict(X_test).argmax(axis=-1)), file_pi) with open('/content/%s_ClassificationReport' %modelName, 'wb') as file_pi: pickle.dump(classification_report(y_test.argmax(axis=-1), model.predict(X_test).argmax(axis=-1), target_names=ClassNames), file_pi) print(pickle.load(open('Model3_ClassificationReport','rb')))
_____no_output_____
Apache-2.0
final_project_files/code/NLP_Project_Neural_Network_Model.ipynb
PrudhviVajja/Mutation_Prediction
Running the model with the adam optimizer and tfidf vectorizer
modelName = 'Model5' #name of the model, this will be used to save model evaluation and history numEpochs = 200 # maximum number of epochs if early stopping doesnt work batchsize = 50 # batchsize which will be used in each step by optimizer defined in the model optimizer = 'adam' #optimizer to be used in model.fit keras method X, y = Preprocessing() X_train, X_test, y_train, y_test, ClassNames = TrainTestSplit(X, y) model = ModelBuild(X_train, y_train) PlotModel(model, modelName +".png") history = PlotTraining(model, X_test, y_test) print(confusion_matrix(y_test.argmax(axis=-1),model.predict(X_test).argmax(axis=-1))) print(classification_report(y_test.argmax(axis=-1), model.predict(X_test).argmax(axis=-1),target_names=ClassNames)) X_train.shape with open('/content/%s' %modelName, 'wb') as file_pi: pickle.dump(history.history, file_pi) history = pickle.load(open('/content/%s' % modelName, "rb")) model.save(modelName +'.h5') plot(history) with open('/content/%s_train' %modelName, 'wb') as file_pi: pickle.dump(X_train, file_pi) with open('/content/%s_test' %modelName, 'wb') as file_pi: pickle.dump(X_test, file_pi) with open('/content/%s_Labeltest' %modelName, 'wb') as file_pi: pickle.dump(y_test, file_pi) with open('/content/%s_LabelTrain' %modelName, 'wb') as file_pi: pickle.dump(y_train, file_pi) with open('/content/%s_ConfusionMatrix' %modelName, 'wb') as file_pi: pickle.dump(confusion_matrix(y_test.argmax(axis=-1), model.predict(X_test).argmax(axis=-1)), file_pi) with open('/content/%s_ClassificationReport' %modelName, 'wb') as file_pi: pickle.dump(classification_report(y_test.argmax(axis=-1), model.predict(X_test).argmax(axis=-1), target_names=ClassNames), file_pi)
_____no_output_____
Apache-2.0
final_project_files/code/NLP_Project_Neural_Network_Model.ipynb
PrudhviVajja/Mutation_Prediction
Code to download the files as zip folders (to load the models and datasets for prediction/evaluation use pickle.load)
# !zip -r '/Model1.zip' 'Model1Folder'
# files.download('/Model1.zip')
!zip -r '/Model2.zip' 'Model2Folder'
files.download('/Model2.zip')
!zip -r '/Model3.zip' 'Model3Folder'
files.download('/Model3.zip')
!zip -r '/Model4.zip' 'Model4Folder'
files.download('/Model4.zip')
!zip -r '/Model5.zip' 'Model5Folder'
files.download('/Model5.zip')
_____no_output_____
Apache-2.0
final_project_files/code/NLP_Project_Neural_Network_Model.ipynb
PrudhviVajja/Mutation_Prediction
Time handling

Last year in this course, people asked: "how do you handle times?" That's a good question...

Exercise

What is the ambiguity in these cases?

1. Meet me for lunch at 12:00
2. The meeting is at 14:00
3. How many hours are between 01:00 and 06:00 (in the morning)?
4. When does the new year start?

Local times are a *political* construction and subject to change. They differ depending on where you are. Human times are messy. If you try to do things with human times, you can expect to be sad.

But still, *actual* time advances at the same rate all over the world (excluding relativity). There *is* a way to do this.

What are timezones?

A timezone specifies a certain *local time* at a certain location on earth.

If you specify a timestamp such as 14:00 on 1 October 2019, it is **naive** if it does not include a timezone. Depending on where you are standing, you can experience this timestamp at different times.

If it includes a timezone, it is **aware**. An aware timestamp exactly specifies a certain time across the whole world (but depending on where you are standing, your local time may be different).

**UTC** (coordinated universal time) is a certain timezone - the basis of all other timezones.

Unix computers have a designated **localtime** timezone, which is used by default to display things. This is in the `TZ` environment variable.

The **tz database** (or zoneinfo) is an open-source, comprehensive, updated catalog of all timezones across the whole planet since 1970. It contains things like `EET` and `EEST`, but also geographic locations like `Europe/Helsinki`, because the abbreviations can change. [Wikipedia](https://en.wikipedia.org/wiki/Tz_database) and [list of all zones](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones).

unixtime

Unixtime is zero at 00:00 UTC on 1 January 1970, and increases at a rate of one per second. This definition defines a single unique time everywhere in the world. You can find unixtime with the `date +%s` command:
!date +%s
1570084871
CC0-1.0
python/11_Time_handling.ipynb
AaltoScienceIT/python-r-data-analysis-course
You can convert from unixtime to real (local) time using the date command again
!date -d @1234567890
Sat Feb 14 01:31:30 EET 2009
CC0-1.0
python/11_Time_handling.ipynb
AaltoScienceIT/python-r-data-analysis-course
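The same round-trip can be done with Python's standard library; a small sketch (using `datetime.timezone.utc` to get the UTC view of the instant):

```python
import datetime

# unixtime is seconds since 1970-01-01 00:00 UTC
ts = 1234567890

# interpret the same instant in UTC
utc = datetime.datetime.fromtimestamp(ts, tz=datetime.timezone.utc)
print(utc.isoformat())  # 2009-02-13T23:31:30+00:00

# and back again: an aware datetime knows its own unixtime
assert int(utc.timestamp()) == ts
```

This is the same instant the `date -d @1234567890` output above shows, just rendered in UTC instead of EET.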
There are functions which take (unixtime + timezone) and produce the timestamp (year, month, day, hour, minute, second). And vice versa.

Unix time has two main benefits:

* Un-ambiguous: defines a single time
* You can do math on the times and compute differences, add time, etc, and it just works

Recommendations

When you have times, always store them in unixtime in numerical format.

When you need a human time (e.g. "what hour was this time"), you use a function to compute that property *in a given timezone*.

If you store the other time components, for example hour and minute, this is just for convenience and you should *not* assume that you can go back to the unixtime to do math.

[Richard's python time reference](http://rkd.zgib.net/wiki/DebianNotes/PythonTime) is the only comprehensive catalog of Python time handling that he knows of.

Exercises

To do these, you have to search for the functions yourself.

1. Convert this unixtime to localtime in Helsinki
ts = 1570078806
_____no_output_____
CC0-1.0
python/11_Time_handling.ipynb
AaltoScienceIT/python-r-data-analysis-course
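One way to approach this with only the standard library (a sketch: a fixed UTC+3 offset stands in for the real Europe/Helsinki zone; real code should use the tz database via `zoneinfo` or `pytz` so DST is handled for you):

```python
import datetime

ts = 1570078806

# Helsinki observes UTC+3 during EEST; hard-coded here purely for illustration.
helsinki = datetime.timezone(datetime.timedelta(hours=3))
local = datetime.datetime.fromtimestamp(ts, tz=helsinki)

# the "hour" property only exists relative to a timezone
print(local.hour)
```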
2. Convert the same time to UTC

Convert that unixtime to a pandas `Timestamp`

You'll need to search the docs some...

Localization and conversion

If you are given a time like "14:00 1 October 2019", and you want to convert it to a different timezone, can you? No, because there is no timezone already. You have to **localize** it by applying a timezone, then you can convert.
import pytz
tz = pytz.timezone("Asia/Tokyo")
tz

# Make a timestamp from a real time. We don't know when this is...
import pandas as pd
import datetime
dt = pd.Timestamp(datetime.datetime(2019, 10, 1, 14, 0))
dt
dt.timestamp()

# Localize it - interpret it as a certain timezone
localized = dt.tz_localize(tz)
localized
dt.timestamp()

converted = localized.tz_convert(pytz.timezone('Europe/Helsinki'))
converted
_____no_output_____
CC0-1.0
python/11_Time_handling.ipynb
AaltoScienceIT/python-r-data-analysis-course
And we notice it does the conversion... if we don't localize first, then this doesn't work.

Exercises

1. Convert this timestamp to a pandas timestamp in Europe/Helsinki and Asia/Tokyo
ts = 1570078806
_____no_output_____
CC0-1.0
python/11_Time_handling.ipynb
AaltoScienceIT/python-r-data-analysis-course
Print the day of the year and hour of this unixtime

From the command line
!date !date -d "15:00" !date -d "15:00 2019-10-31" !date -d "15:00 2019-10-31" +%s !date -d @1572526800 !TZ=America/New_York date -d @1572526800 !date -d '2019-10-01 14:00 CEST'
Tue Oct 1 15:00:00 EEST 2019
CC0-1.0
python/11_Time_handling.ipynb
AaltoScienceIT/python-r-data-analysis-course
Hyper Parameter Tuning

One of the primary objectives and challenges in the machine learning process is improving the performance score based on data patterns and observed evidence. To achieve this objective, almost all machine learning algorithms have a specific set of parameters that need to be estimated from the dataset to maximize the performance score. The best way to choose good hyperparameters is through trial and error over all possible combinations of parameter values. Scikit-learn provides GridSearch and RandomSearch functions to facilitate an automatic and reproducible approach to hyperparameter tuning.
from IPython.display import Image Image(filename='../Chapter 4 Figures/Hyper_Parameter_Tuning.png', width=1000)
_____no_output_____
MIT
jupyter_notebooks/machine_learning/ebook_mastering_ml_in_6_steps/Chapter_4_Code/Code/Hyper_Parameter_Tuning.ipynb
manual123/Nacho-Jupyter-Notebooks
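The exhaustive search that grid search performs can be sketched in a few lines of plain Python (the `score` function here is a made-up stand-in for the cross-validated metric, and the grid values mirror the random-forest grid used below):

```python
import itertools

# A made-up "validation score" to maximize; stands in for a cross-validated metric.
def score(n_estimators, max_depth):
    return -abs(n_estimators - 500) / 1000 - abs(max_depth - 5) / 10

grid = {"n_estimators": [100, 250, 500, 750, 1000], "max_depth": [1, 3, 5, 7, 9]}

# Grid search is just an exhaustive loop over the Cartesian product of the values.
best = max(
    (dict(zip(grid, combo)) for combo in itertools.product(*grid.values())),
    key=lambda params: score(**params),
)
print(best)  # {'n_estimators': 500, 'max_depth': 5}
```

GridSearchCV does the same enumeration but evaluates each combination with cross-validation and refits the best estimator for you.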
GridSearch
import pandas as pd import numpy as np from sklearn.preprocessing import StandardScaler from sklearn.cross_validation import train_test_split from sklearn import cross_validation from sklearn import metrics from matplotlib.colors import ListedColormap import matplotlib.pyplot as plt %matplotlib inline from sklearn.ensemble import RandomForestClassifier from sklearn.grid_search import GridSearchCV seed = 2017 # read the data in df = pd.read_csv("Data/Diabetes.csv") X = df.ix[:,:8].values # independent variables y = df['class'].values # dependent variables #Normalize X = StandardScaler().fit_transform(X) # evaluate the model by splitting into train and test sets X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=seed) kfold = cross_validation.StratifiedKFold(y=y_train, n_folds=5, random_state=seed) num_trees = 100 clf_rf = RandomForestClassifier(random_state=seed).fit(X_train, y_train) rf_params = { 'n_estimators': [100, 250, 500, 750, 1000], 'criterion': ['gini', 'entropy'], 'max_features': [None, 'auto', 'sqrt', 'log2'], 'max_depth': [1, 3, 5, 7, 9] } # setting verbose = 10 will print the progress for every 10 task completion grid = GridSearchCV(clf_rf, rf_params, scoring='roc_auc', cv=kfold, verbose=10, n_jobs=-1) grid.fit(X_train, y_train) print 'Best Parameters: ', grid.best_params_ results = cross_validation.cross_val_score(grid.best_estimator_, X_train,y_train, cv=kfold) print "Accuracy - Train CV: ", results.mean() print "Accuracy - Train : ", metrics.accuracy_score(grid.best_estimator_.predict(X_train), y_train) print "Accuracy - Test : ", metrics.accuracy_score(grid.best_estimator_.predict(X_test), y_test)
C:\Users\Manoh\Anaconda2\lib\site-packages\sklearn\cross_validation.py:44: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20. "This module will be removed in 0.20.", DeprecationWarning) C:\Users\Manoh\Anaconda2\lib\site-packages\sklearn\grid_search.py:43: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. This module will be removed in 0.20. DeprecationWarning)
MIT
jupyter_notebooks/machine_learning/ebook_mastering_ml_in_6_steps/Chapter_4_Code/Code/Hyper_Parameter_Tuning.ipynb
manual123/Nacho-Jupyter-Notebooks
RandomSearch
from sklearn.model_selection import RandomizedSearchCV from scipy.stats import randint as sp_randint # specify parameters and distributions to sample from param_dist = {'n_estimators':sp_randint(100,1000), 'criterion': ['gini', 'entropy'], 'max_features': [None, 'auto', 'sqrt', 'log2'], 'max_depth': [None, 1, 3, 5, 7, 9] } # run randomized search n_iter_search = 20 random_search = RandomizedSearchCV(clf_rf, param_distributions=param_dist, cv=kfold, n_iter=n_iter_search, verbose=10, n_jobs=-1, random_state=seed) random_search.fit(X_train, y_train) # report(random_search.cv_results_) print 'Best Parameters: ', random_search.best_params_ results = cross_validation.cross_val_score(random_search.best_estimator_, X_train,y_train, cv=kfold) print "Accuracy - Train CV: ", results.mean() print "Accuracy - Train : ", metrics.accuracy_score(random_search.best_estimator_.predict(X_train), y_train) print "Accuracy - Test : ", metrics.accuracy_score(random_search.best_estimator_.predict(X_test), y_test) from bayes_opt import BayesianOptimization from sklearn.cross_validation import cross_val_score def rfccv(n_estimators, min_samples_split, max_features): return cross_val_score(RandomForestClassifier(n_estimators=int(n_estimators), min_samples_split=int(min_samples_split), max_features=min(max_features, 0.999), random_state=2017), X_train, y_train, 'f1', cv=kfold).mean() gp_params = {"alpha": 1e5} rfcBO = BayesianOptimization(rfccv, {'n_estimators': (100, 1000), 'min_samples_split': (2, 25), 'max_features': (0.1, 0.999)}) rfcBO.maximize(n_iter=10, **gp_params) print('RFC: %f' % rfcBO.res['max']['max_val'])
Initialization ------------------------------------------------------------------------------------- Step | Time | Value | max_features | min_samples_split | n_estimators | 1 | 00m13s |  0.59033 |  0.1628 |  2.7911 |  891.4580 | 2 | 00m08s | 0.57056 | 0.1725 | 4.1269 | 543.9055 | 3 | 00m04s |  0.61064 |  0.7927 |  21.6962 |  275.7203 | 4 | 00m06s | 0.58312 | 0.2228 | 6.4325 | 437.6023 | 5 | 00m03s |  0.61265 |  0.3626 |  12.2393 |  236.6017 | Bayesian Optimization ------------------------------------------------------------------------------------- Step | Time | Value | max_features | min_samples_split | n_estimators | 6 | 00m17s |  0.61354 |  0.6776 |  22.9885 |  999.9903 | 7 | 00m01s | 0.60445 | 0.3997 | 22.0831 | 100.0026 | 8 | 00m16s |  0.61529 |  0.7174 |  21.0355 |  999.9898 | 9 | 00m01s |  0.61976 |  0.4951 |  2.7633 |  100.0283 | 10 | 00m17s |  0.62833 |  0.5922 |  2.0234 |  999.9699 | 11 | 00m02s | 0.61220 | 0.9008 | 24.4009 | 100.0341 | 12 | 00m17s | 0.60972 | 0.8109 | 5.1949 | 999.9955 | 13 | 00m01s | 0.60395 | 0.2883 | 2.0518 | 100.0341 | 14 | 00m16s | 0.61529 | 0.6443 | 24.7840 | 999.9869 | 15 | 00m01s | 0.59926 | 0.3312 | 19.7489 | 100.0013 | RFC: 0.628329
MIT
jupyter_notebooks/machine_learning/ebook_mastering_ml_in_6_steps/Chapter_4_Code/Code/Hyper_Parameter_Tuning.ipynb
manual123/Nacho-Jupyter-Notebooks
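Random search instead draws a fixed number of configurations from the parameter distributions; a minimal sketch (using ranges like `param_dist` above, with the plain `random` module standing in for scipy's samplers):

```python
import random

random.seed(2017)

# Sample one configuration from the search space.
def sample_config():
    return {
        "n_estimators": random.randint(100, 1000),
        "criterion": random.choice(["gini", "entropy"]),
        "max_features": random.choice([None, "auto", "sqrt", "log2"]),
        "max_depth": random.choice([None, 1, 3, 5, 7, 9]),
    }

n_iter_search = 20
candidates = [sample_config() for _ in range(n_iter_search)]
print(len(candidates))  # 20 sampled configurations instead of the full grid
```

Each candidate would then be scored by cross-validation, exactly as in the grid search, and the best one kept.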
Notebook 2: Gradient Descent

Learning Goal

The goal of this notebook is to gain intuition for various gradient descent methods by visualizing and applying these methods to some simple two-dimensional surfaces. Methods studied include ordinary gradient descent, gradient descent with momentum, NAG, ADAM, and RMSProp.

Overview

In this notebook, we will visualize what different gradient descent methods are doing using some simple surfaces. From the onset, we emphasize that doing gradient descent on the surfaces is different from performing gradient descent on a loss function in Machine Learning (ML). The reason is that in ML not only do we want to find good minima, we want to find good minima that generalize well to new data. Despite this crucial difference, we can still build intuition about gradient descent methods by applying them to simple surfaces (see related blog posts [here](http://ruder.io/optimizing-gradient-descent/) and [here](http://tiao.io/notes/visualizing-and-animating-optimization-algorithms-with-matplotlib/)).

Surfaces

We will consider three simple surfaces: a quadratic minimum of the form

$$z=ax^2+by^2,$$

a saddle point of the form

$$z=ax^2-by^2,$$

and [Beale's Function](https://en.wikipedia.org/wiki/Test_functions_for_optimization), a non-convex function often used to test optimization algorithms, of the form:

$$z(x,y) = (1.5-x+xy)^2+(2.25-x+xy^2)^2+(2.625-x+xy^3)^2$$

These surfaces can be plotted using the cells below.
#This cell sets up basic plotting functions we
#will use to visualize the gradient descent routines.

#Make plots interactive
#%matplotlib notebook

#Make plots static
%matplotlib inline

#Make 3D plots
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
from matplotlib import cm
#from matplotlib import animation
from IPython.display import HTML
from matplotlib.colors import LogNorm
#from itertools import zip_longest

#Import Numpy
import numpy as np

#Define functions for plotting

def plot_surface(x, y, z, azim=-60, elev=40, dist=10, cmap="RdYlBu_r"):
    fig = plt.figure()
    ax = fig.add_subplot(111, projection='3d')
    plot_args = {'rstride': 1, 'cstride': 1, 'cmap': cmap,
                 'linewidth': 20, 'antialiased': True,
                 'vmin': -2, 'vmax': 2}
    ax.plot_surface(x, y, z, **plot_args)
    ax.view_init(azim=azim, elev=elev)
    ax.dist = dist
    ax.set_xlim(-1, 1)
    ax.set_ylim(-1, 1)
    ax.set_zlim(-2, 2)
    plt.xticks([-1, -0.5, 0, 0.5, 1], ["-1", "-1/2", "0", "1/2", "1"])
    plt.yticks([-1, -0.5, 0, 0.5, 1], ["-1", "-1/2", "0", "1/2", "1"])
    ax.set_zticks([-2, -1, 0, 1, 2])
    ax.set_zticklabels(["-2", "-1", "0", "1", "2"])
    ax.set_xlabel("x", fontsize=18)
    ax.set_ylabel("y", fontsize=18)
    ax.set_zlabel("z", fontsize=18)
    return fig, ax

def overlay_trajectory_quiver(ax, obj_func, trajectory, color='k'):
    xs = trajectory[:, 0]
    ys = trajectory[:, 1]
    zs = obj_func(xs, ys)
    ax.quiver(xs[:-1], ys[:-1], zs[:-1],
              xs[1:]-xs[:-1], ys[1:]-ys[:-1], zs[1:]-zs[:-1],
              color=color, arrow_length_ratio=0.3)
    return ax

def overlay_trajectory(ax, obj_func, trajectory, label, color='k'):
    xs = trajectory[:, 0]
    ys = trajectory[:, 1]
    zs = obj_func(xs, ys)
    ax.plot(xs, ys, zs, color, label=label)
    return ax

def overlay_trajectory_contour_M(ax, trajectory, label, color='k', lw=2):
    xs = trajectory[:, 0]
    ys = trajectory[:, 1]
    ax.plot(xs, ys, color, label=label, lw=lw)
    ax.plot(xs[-1], ys[-1], color+'>', markersize=14)
    return ax

def overlay_trajectory_contour(ax, trajectory, label, color='k', lw=2):
    xs = trajectory[:, 0]
    ys = trajectory[:, 1]
    ax.plot(xs, ys, color, label=label, lw=lw)
    return ax

#DEFINE SURFACES WE WILL WORK WITH

#Define monkey saddle and gradient
def monkey_saddle(x, y):
    return x**3 - 3*x*y**2

def grad_monkey_saddle(params):
    x = params[0]
    y = params[1]
    grad_x = 3*x**2 - 3*y**2
    grad_y = -6*x*y
    return [grad_x, grad_y]

#Define saddle surface
def saddle_surface(x, y, a=1, b=1):
    return a*x**2 - b*y**2

def grad_saddle_surface(params, a=1, b=1):
    x = params[0]
    y = params[1]
    grad_x = 2*a*x
    grad_y = -2*b*y
    return [grad_x, grad_y]

# Define minima_surface
def minima_surface(x, y, a=1, b=1):
    return a*x**2 + b*y**2 - 1

def grad_minima_surface(params, a=1, b=1):
    x = params[0]
    y = params[1]
    grad_x = 2*a*x
    grad_y = 2*b*y
    return [grad_x, grad_y]

def beales_function(x, y):
    return np.square(1.5 - x + x*y) + np.square(2.25 - x + x*y*y) + np.square(2.625 - x + x*y**3)

def grad_beales_function(params):
    x = params[0]
    y = params[1]
    grad_x = 2*(1.5-x+x*y)*(-1+y) + 2*(2.25-x+x*y**2)*(-1+y**2) + 2*(2.625-x+x*y**3)*(-1+y**3)
    grad_y = 2*(1.5-x+x*y)*x + 4*(2.25-x+x*y**2)*x*y + 6*(2.625-x+x*y**3)*x*y**2
    return [grad_x, grad_y]

def contour_beales_function():
    #plot Beale's function
    x, y = np.meshgrid(np.arange(-4.5, 4.5, 0.2), np.arange(-4.5, 4.5, 0.2))
    fig, ax = plt.subplots(figsize=(10, 6))
    z = beales_function(x, y)
    cax = ax.contour(x, y, z, levels=np.logspace(0, 5, 35), norm=LogNorm(), cmap="RdYlBu_r")
    ax.plot(3, 0.5, 'r*', markersize=18)
    ax.set_xlabel('$x$')
    ax.set_ylabel('$y$')
    ax.set_xlim((-4.5, 4.5))
    ax.set_ylim((-4.5, 4.5))
    return fig, ax

#Make plots of surfaces
plt.close()  # closes previous plots
x, y = np.mgrid[-1:1:31j, -1:1:31j]
fig1, ax1 = plot_surface(x, y, monkey_saddle(x, y))
fig2, ax2 = plot_surface(x, y, saddle_surface(x, y))
fig3, ax3 = plot_surface(x, y, minima_surface(x, y, 5), 0)

#Contour plot of Beale's Function
fig4, ax4 = contour_beales_function()

plt.show()
_____no_output_____
MIT
jupyter_notebooks/notebooks/NB2_CIV-gradient_descent.ipynb
jbRothschild/mlreview_notebooks
## Gradient descent with and without momentum

In this notebook, we will visualize various gradient descent algorithms used in machine learning. We will be especially interested in trying to understand how various hyperparameters -- especially the learning rate -- affect our performance. Here, we confine ourselves primarily to looking at the performance in the absence of noise. However, we encourage the reader to experiment with the noise strength below and see what differences introducing stochasticity makes. Throughout, we denote the parameters by $\theta$ and the energy function we are trying to minimize by $E(\theta)$.

### Gradient Descent

We start by considering a simple gradient descent method. In this method, we take steps in the direction of the local gradient. Given some parameters $\theta$, we adjust the parameters at each iteration so that
$$\theta_{t+1}= \theta_t - \eta_t \nabla_\theta E(\theta),$$
where we have introduced the learning rate $\eta_t$ that controls how large a step we take. In general, the algorithm is extremely sensitive to the choice of $\eta_t$. If $\eta_t$ is too large, then one can wildly oscillate around minima and miss important structure at small scales. This problem is amplified if our gradient computations are noisy and inexact (as is often the case in machine learning applications). If $\eta_t$ is too small, then the learning/minimization procedure becomes extremely slow. This raises the natural question: What sets the natural scale for the learning rate and how can we adaptively choose it? We discuss this extensively in Section IV of the review.

### Gradient Descent with Momentum

One problem with gradient descent is that it has no memory of where the "ball rolling down the hill" comes from. This can be an issue when there are many shallow minima in our landscape. If we make an analogy with a ball rolling down a hill, the lack of memory is equivalent to having no inertia or momentum (i.e. completely overdamped dynamics).
Without momentum, the ball has no kinetic energy and cannot climb out of shallow minima. Momentum becomes especially important when we start thinking about stochastic gradient descent with noisy, stochastic estimates of the gradient. In that case, we should remember where we were coming from and not react drastically to each new update.

Inspired by this, we can add a memory or momentum term to the stochastic gradient descent update above:
$$v_{t}=\gamma v_{t-1}+\eta_{t}\nabla_\theta E(\theta_t),\\
\theta_{t+1}= \theta_t -v_{t},$$
with $0\le \gamma < 1$ called the momentum parameter. When $\gamma=0$, this reduces to ordinary gradient descent, and increasing $\gamma$ increases the inertial contribution to the gradient. From the equations above, we can see that the typical memory lifetime of the gradient is given by $(1-\gamma)^{-1}$. For $\gamma=0$, as in gradient descent, the lifetime is just one step. For $\gamma=0.9$, we typically remember a gradient for ten steps. We will call this gradient descent with classical momentum, or CM for short.

A final widely used variant of gradient descent with momentum is called the Nesterov accelerated gradient (NAG). In NAG, rather than calculating the gradient at the current position, one calculates the gradient at the position momentum will carry us to at time $t+1$, namely $\theta_t -\gamma v_{t-1}$. Thus, the update becomes
$$v_{t}=\gamma v_{t-1}+\eta_{t}\nabla_\theta E(\theta_t-\gamma v_{t-1}),\\
\theta_{t+1}= \theta_t -v_{t}.$$
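The $(1-\gamma)^{-1}$ memory-lifetime claim can be checked directly. For a constant gradient $g$, the velocity is a geometric sum, $v_t = \eta g\, (1-\gamma^t)/(1-\gamma)$, which saturates at $\eta g/(1-\gamma)$. A minimal standalone sketch:

```python
eta, gamma, g = 0.1, 0.9, 1.0   # learning rate, momentum parameter, constant gradient
v = 0.0
history = []
for t in range(200):
    v = gamma * v + eta * g     # the CM velocity update with a constant gradient
    history.append(v)

terminal = eta * g / (1 - gamma)  # geometric-series limit of the velocity
print(history[9], terminal)
# after (1-gamma)^-1 = 10 steps the velocity has reached 1 - 0.9**10, i.e. ~65% of terminal
```

So with $\gamma = 0.9$ the velocity equilibrates over roughly ten steps, which is exactly the "memory of ten gradients" statement above.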
# This cell implements simple gradient descent, gradient descent with momentum,
# and Nesterov's accelerated gradient (NAG).

# Mean-gradient based methods

def gd(grad, init, n_epochs=1000, eta=10**-4, noise_strength=0):
    # This is a simple optimizer
    params = np.array(init)
    param_traj = np.zeros([n_epochs+1, 2])
    param_traj[0, ] = init
    v = 0
    for j in range(n_epochs):
        noise = noise_strength * np.random.randn(params.size)
        v = eta * (np.array(grad(params)) + noise)
        params = params - v
        param_traj[j+1, ] = params
    return param_traj

def gd_with_mom(grad, init, n_epochs=5000, eta=10**-4, gamma=0.9, noise_strength=0):
    params = np.array(init)
    param_traj = np.zeros([n_epochs+1, 2])
    param_traj[0, ] = init
    v = 0
    for j in range(n_epochs):
        noise = noise_strength * np.random.randn(params.size)
        v = gamma * v + eta * (np.array(grad(params)) + noise)
        params = params - v
        param_traj[j+1, ] = params
    return param_traj

def NAG(grad, init, n_epochs=5000, eta=10**-4, gamma=0.9, noise_strength=0):
    params = np.array(init)
    param_traj = np.zeros([n_epochs+1, 2])
    param_traj[0, ] = init
    v = 0
    for j in range(n_epochs):
        noise = noise_strength * np.random.randn(params.size)
        params_nesterov = params - gamma * v
        v = gamma * v + eta * (np.array(grad(params_nesterov)) + noise)
        params = params - v
        param_traj[j+1, ] = params
    return param_traj
_____no_output_____
MIT
jupyter_notebooks/notebooks/NB2_CIV-gradient_descent.ipynb
jbRothschild/mlreview_notebooks
## Experiments with GD, CM, and NAG

Before introducing more complicated situations, let us experiment with these methods to gain some intuition.

Let us look at the dependence of GD on the learning rate in a simple quadratic minimum of the form $z=ax^2+by^2-1$. Make plots below for $\eta=0.1, 0.5, 1, 1.01$ and $a=1$ and $b=1$. (To do this, you would have to add additional arguments to the function `gd` above in order to pass the new values of `a` and `b`; otherwise the default values `a=1` and `b=1` will be used by the gradient.)

What are the qualitatively different behaviors that arise as $\eta$ is increased? What does this tell us about the importance of choosing learning parameters? How do these change if we change $a$ and $b$ above? In particular, how does anisotropy change the learning behavior? Make similar plots for CM and NAG. How do the learning rates for these procedures compare with those for GD?
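Before running the cell below, it helps to work out what to expect analytically. Restricting to one coordinate, on $z = ax^2$ the GD update is $x \leftarrow (1 - 2\eta a)\,x$, so iterates converge exactly when $|1 - 2\eta a| < 1$, i.e. $\eta < 1/a$. For $a=1$ this predicts smooth decay, one-step convergence, undamped oscillation, and divergence at $\eta = 0.1, 0.5, 1, 1.01$ respectively. A quick standalone check:

```python
def gd_1d(a, eta, x0=1.0, n=50):
    # gradient descent on f(x) = a*x**2; each step multiplies x by (1 - 2*eta*a)
    x = x0
    for _ in range(n):
        x = x - eta * 2 * a * x
    return x

a = 1.0
print(gd_1d(a, 0.1))    # shrinks geometrically toward 0
print(gd_1d(a, 0.5))    # update factor 0: lands on the minimum in one step
print(gd_1d(a, 1.0))    # update factor -1: oscillates between +-x0 forever
print(gd_1d(a, 1.01))   # |factor| > 1: diverges
```

With anisotropy ($a \neq b$), the stiffest direction (largest coefficient) sets the stability bound on $\eta$, which is why ill-conditioned landscapes force small learning rates.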
# Investigate effect of learning rate in GD
plt.close()
a, b = 1.0, 1.0
x, y = np.meshgrid(np.arange(-4.5, 4.5, 0.2), np.arange(-4.5, 4.5, 0.2))
fig, ax = plt.subplots(figsize=(10, 6))
z = np.abs(minima_surface(x, y, a, b))
ax.contour(x, y, z, levels=np.logspace(0.0, 5, 35), norm=LogNorm(), cmap="RdYlBu_r")
ax.plot(0, 0, 'r*', markersize=18)

# initial points
init1 = [-2, 4]
init2 = [-1.7, 4]
init3 = [-1.5, 4]
init4 = [-3, 4.5]

eta1 = 0.1
eta2 = 0.5
eta3 = 1
eta4 = 1.01

gd_1 = gd(grad_minima_surface, init1, n_epochs=100, eta=eta1)
gd_2 = gd(grad_minima_surface, init2, n_epochs=100, eta=eta2)
gd_3 = gd(grad_minima_surface, init3, n_epochs=100, eta=eta3)
gd_4 = gd(grad_minima_surface, init4, n_epochs=10, eta=eta4)
#print(gd_1)

overlay_trajectory_contour(ax, gd_1, '$\eta=$%s' % eta1, 'g--*', lw=0.5)
overlay_trajectory_contour(ax, gd_2, '$\eta=$%s' % eta2, 'b-<', lw=0.5)
overlay_trajectory_contour(ax, gd_3, '$\eta=$%s' % eta3, '->', lw=0.5)
overlay_trajectory_contour(ax, gd_4, '$\eta=$%s' % eta4, 'c-o', lw=0.5)
plt.legend(loc=2)
plt.show()
fig.savefig("GD3regimes.pdf", bbox_inches='tight')
_____no_output_____
MIT
jupyter_notebooks/notebooks/NB2_CIV-gradient_descent.ipynb
jbRothschild/mlreview_notebooks
## Gradient Descents that utilize the second moment

In stochastic gradient descent, with and without momentum, we still have to specify a schedule for tuning the learning rates $\eta_t$ as a function of time. As discussed in Sec. IV in the context of Newton's method, this presents a number of dilemmas. The learning rate is limited by the steepest direction, which can change depending on where in the landscape we are. To circumvent this problem, ideally our algorithm would take large steps in shallow, flat directions and small steps in steep, narrow directions. Second-order methods accomplish this by calculating or approximating the Hessian and normalizing the learning rate by the curvature. However, this is very computationally expensive for extremely large models. Ideally, we would like to be able to adaptively change our step size to match the landscape without paying the steep computational price of calculating or approximating Hessians.

Recently, a number of methods have been introduced that accomplish this by tracking not only the gradient but also the second moment of the gradient. These methods include AdaGrad, AdaDelta, RMSprop, and ADAM. Here, we discuss the latter two as representatives of this class of algorithms.

In RMSprop (Root-Mean-Square propagation), in addition to keeping a running average of the first moment of the gradient, we also keep track of the second moment through a moving average. The update rule for RMSprop is given by
$$\mathbf{g}_t = \nabla_\theta E(\boldsymbol{\theta}) \\
\mathbf{s}_t =\beta \mathbf{s}_{t-1} +(1-\beta)\mathbf{g}_t^2 \nonumber \\
\boldsymbol{\theta}_{t+1}=\boldsymbol{\theta}_t - \eta_t { \mathbf{g}_t \over \sqrt{\mathbf{s}_t +\epsilon}}, \nonumber \\$$
where $\beta$ controls the averaging time of the second moment and is typically taken to be about $\beta=0.9$, $\eta_t$ is a learning rate typically chosen to be $10^{-3}$, and $\epsilon\sim 10^{-8}$ is a small regularization constant to prevent divergences.
It is clear from this formula that the learning rate is reduced in directions where the norm of the gradient is consistently large. This greatly speeds up convergence by allowing us to use a larger learning rate for flat directions.

A related algorithm is the ADAM optimizer. In ADAM, we keep a running average of both the first and second moment of the gradient and use this information to adaptively change the learning rate for different parameters. In addition to keeping a running average of the first and second moments of the gradient, ADAM performs an additional bias correction to account for the fact that we are estimating the first two moments of the gradient using a running average (denoted by the hats in the update rule below). The update rule for ADAM is given by (where multiplication and division are understood to be element-wise operations)
$$\mathbf{g}_t = \nabla_\theta E(\boldsymbol{\theta}) \\
\mathbf{m}_t = \beta_1 \mathbf{m}_{t-1} + (1-\beta_1) \mathbf{g}_t \nonumber \\
\mathbf{s}_t =\beta_2 \mathbf{s}_{t-1} +(1-\beta_2)\mathbf{g}_t^2 \nonumber \\
\hat{\mathbf{m}}_t={\mathbf{m}_t \over 1-\beta_1} \nonumber \\
\hat{\mathbf{s}}_t ={\mathbf{s}_t \over1-\beta_2} \nonumber \\
\boldsymbol{\theta}_{t+1}=\boldsymbol{\theta}_t - \eta_t { \hat{\mathbf{m}}_t \over \sqrt{\hat{\mathbf{s}}_t +\epsilon}}, \nonumber $$
where $\beta_1$ and $\beta_2$ set the memory lifetime of the first and second moment and are typically taken to be $0.9$ and $0.99$ respectively, and $\eta$ and $\epsilon$ are identical to RMSprop.
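One consequence of the bias correction is worth seeing numerically: starting from $\mathbf{m}_0 = \mathbf{s}_0 = 0$, the first ADAM step has magnitude roughly $\eta$ regardless of the raw gradient scale, since $\hat{m}_1/\sqrt{\hat{s}_1} = g/\sqrt{g^2} = \mathrm{sign}(g)$. A minimal scalar sketch of a single update, following the simple $1/(1-\beta)$ correction used in this notebook:

```python
import numpy as np

def adam_step(g, m, s, eta=1e-3, beta1=0.9, beta2=0.99, eps=1e-8):
    # one scalar ADAM update; the step is taken downhill
    m = beta1 * m + (1 - beta1) * g
    s = beta2 * s + (1 - beta2) * g * g
    m_hat = m / (1 - beta1)
    s_hat = s / (1 - beta2)
    step = -eta * m_hat / np.sqrt(s_hat + eps)
    return step, m, s

# first step from rest (m = s = 0): size ~ eta, independent of the gradient scale
step_g1, _, _ = adam_step(g=1.0, m=0.0, s=0.0)
step_g1e4, _, _ = adam_step(g=1e4, m=0.0, s=0.0)
print(step_g1, step_g1e4)  # both approximately -0.001
```

This self-normalization is what lets ADAM and RMSprop use $\eta = 10^{-3}$ on landscapes where plain GD would need a far smaller rate.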
################################################################################
# Methods that exploit first and second moments of gradient: RMS-PROP and ADAM
################################################################################

def rms_prop(grad, init, n_epochs=5000, eta=10**-3, beta=0.9, epsilon=10**-8, noise_strength=0):
    params = np.array(init)
    param_traj = np.zeros([n_epochs+1, 2])
    param_traj[0, ] = init
    grad_sq = 0
    for j in range(n_epochs):
        noise = noise_strength * np.random.randn(params.size)
        g = np.array(grad(params)) + noise
        grad_sq = beta * grad_sq + (1 - beta) * g * g
        v = eta * np.divide(g, np.sqrt(grad_sq + epsilon))
        params = params - v
        param_traj[j+1, ] = params
    return param_traj

def adams(grad, init, n_epochs=5000, eta=10**-4, gamma=0.9, beta=0.99, epsilon=10**-8, noise_strength=0):
    params = np.array(init)
    param_traj = np.zeros([n_epochs+1, 2])
    param_traj[0, ] = init
    v = 0
    grad_sq = 0
    for j in range(n_epochs):
        noise = noise_strength * np.random.randn(params.size)
        g = np.array(grad(params)) + noise
        v = gamma * v + (1 - gamma) * g
        grad_sq = beta * grad_sq + (1 - beta) * g * g
        v_hat = v / (1 - gamma)
        grad_sq_hat = grad_sq / (1 - beta)
        params = params - eta * np.divide(v_hat, np.sqrt(grad_sq_hat + epsilon))
        param_traj[j+1, ] = params
    return param_traj
_____no_output_____
MIT
jupyter_notebooks/notebooks/NB2_CIV-gradient_descent.ipynb
jbRothschild/mlreview_notebooks
## Experiments with ADAM and RMSprop

In this section, we will experiment with ADAM and RMSprop. To do so, we will use a function commonly used in optimization protocols:
$$f(x,y)=(1.5-x+xy)^2+(2.25-x+xy^2)^2+(2.625-x+xy^3)^2.$$
This function has a global minimum at $(x,y)=(3,0.5)$. We will use GD, GD with classical momentum, NAG, RMSprop, and ADAM to find minima starting from different initial conditions.

One of the things you should experiment with is the learning rate and the number of steps, $N_{\mathrm{steps}}$, we take. Initially, we have set $N_{\mathrm{steps}}=10^4$, the learning rate for ADAM/RMSprop to $\eta=10^{-3}$, and the learning rate for the remaining methods to $10^{-6}$. Examine the plot for these default values. What do you see? Make a plot where the learning rate of all methods is $\eta=10^{-6}$. How does your plot change? Now set the learning rate for all algorithms to $\eta=10^{-3}$. What goes wrong? Why?
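One way to see why $\eta = 10^{-3}$ is far too large for plain GD here: Beale's function is a sum of squares of cubics, so its gradient is enormous far from the minimum, and the very first step of size $\eta\,\|\nabla E\|$ already overshoots the plotted window. A standalone check at the initial point $(4, 3)$ used below:

```python
import numpy as np

def grad_beales(x, y):
    # analytic gradient of Beale's function
    gx = (2*(1.5 - x + x*y)*(-1 + y)
          + 2*(2.25 - x + x*y**2)*(-1 + y**2)
          + 2*(2.625 - x + x*y**3)*(-1 + y**3))
    gy = (2*(1.5 - x + x*y)*x
          + 4*(2.25 - x + x*y**2)*x*y
          + 6*(2.625 - x + x*y**3)*x*y**2)
    return np.array([gx, gy])

g0 = np.linalg.norm(grad_beales(4.0, 3.0))  # gradient norm at the init (4, 3)
first_step = 1e-3 * g0                      # displacement of the first GD step
print(g0, first_step)
# the gradient norm is ~2.5e4, so the first step moves ~25 units,
# far outside the [-4.5, 4.5] plotting window
```

ADAM and RMSprop tolerate the same $\eta$ because they divide by $\sqrt{\hat{\mathbf{s}}_t}$, which rescales these huge gradients to order-one steps.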
plt.close()
# Make static plot of the results
Nsteps = 10**4
lr_l = 10**-3
lr_s = 10**-6

init1 = np.array([4, 3])
fig1, ax1 = contour_beales_function()

gd_trajectory1 = gd(grad_beales_function, init1, Nsteps, eta=lr_s, noise_strength=0)
gdm_trajectory1 = gd_with_mom(grad_beales_function, init1, Nsteps, eta=lr_s, gamma=0.9, noise_strength=0)
NAG_trajectory1 = NAG(grad_beales_function, init1, Nsteps, eta=lr_s, gamma=0.9, noise_strength=0)
rms_prop_trajectory1 = rms_prop(grad_beales_function, init1, Nsteps, eta=lr_l, beta=0.9, epsilon=10**-8, noise_strength=0)
adam_trajectory1 = adams(grad_beales_function, init1, Nsteps, eta=lr_l, gamma=0.9, beta=0.99, epsilon=10**-8, noise_strength=0)

overlay_trajectory_contour_M(ax1, gd_trajectory1, 'GD', 'k')
overlay_trajectory_contour_M(ax1, gdm_trajectory1, 'GDM', 'm')
overlay_trajectory_contour_M(ax1, NAG_trajectory1, 'NAG', 'c--')
overlay_trajectory_contour_M(ax1, rms_prop_trajectory1, 'RMS', 'b-.')
overlay_trajectory_contour_M(ax1, adam_trajectory1, 'ADAMS', 'r')
plt.legend(loc=2)

#init2=np.array([1.5,1.5])
#gd_trajectory2=gd(grad_beales_function,init2,Nsteps, eta=10**-6, noise_strength=0)
#gdm_trajectory2=gd_with_mom(grad_beales_function,init2,Nsteps,eta=10**-6, gamma=0.9,noise_strength=0)
#NAG_trajectory2=NAG(grad_beales_function,init2,Nsteps,eta=10**-6, gamma=0.9,noise_strength=0)
#rms_prop_trajectory2=rms_prop(grad_beales_function,init2,Nsteps,eta=10**-3, beta=0.9,epsilon=10**-8,noise_strength=0)
#adam_trajectory2=adams(grad_beales_function,init2,Nsteps,eta=10**-3, gamma=0.9, beta=0.99,epsilon=10**-8,noise_strength=0)
#overlay_trajectory_contour_M(ax1,gdm_trajectory2, 'GDM','m')
#overlay_trajectory_contour_M(ax1,NAG_trajectory2, 'NAG','c--')
#overlay_trajectory_contour_M(ax1,rms_prop_trajectory2,'RMS', 'b-.')
#overlay_trajectory_contour_M(ax1,adam_trajectory2,'ADAMS', 'r')

init3 = np.array([-1, 4])
gd_trajectory3 = gd(grad_beales_function, init3, 10**5, eta=lr_s, noise_strength=0)
gdm_trajectory3 = gd_with_mom(grad_beales_function, init3, 10**5, eta=lr_s, gamma=0.9, noise_strength=0)
NAG_trajectory3 = NAG(grad_beales_function, init3, Nsteps, eta=lr_s, gamma=0.9, noise_strength=0)
rms_prop_trajectory3 = rms_prop(grad_beales_function, init3, Nsteps, eta=lr_l, beta=0.9, epsilon=10**-8, noise_strength=0)
adam_trajectory3 = adams(grad_beales_function, init3, Nsteps, eta=lr_l, gamma=0.9, beta=0.99, epsilon=10**-8, noise_strength=0)

overlay_trajectory_contour_M(ax1, gd_trajectory3, 'GD', 'k')
overlay_trajectory_contour_M(ax1, gdm_trajectory3, 'GDM', 'm')
overlay_trajectory_contour_M(ax1, NAG_trajectory3, 'NAG', 'c--')
overlay_trajectory_contour_M(ax1, rms_prop_trajectory3, 'RMS', 'b-.')
overlay_trajectory_contour_M(ax1, adam_trajectory3, 'ADAMS', 'r')

init4 = np.array([-2, -4])
gd_trajectory4 = gd(grad_beales_function, init4, Nsteps, eta=lr_s, noise_strength=0)
gdm_trajectory4 = gd_with_mom(grad_beales_function, init4, Nsteps, eta=lr_s, gamma=0.9, noise_strength=0)
NAG_trajectory4 = NAG(grad_beales_function, init4, Nsteps, eta=lr_s, gamma=0.9, noise_strength=0)
rms_prop_trajectory4 = rms_prop(grad_beales_function, init4, Nsteps, eta=lr_l, beta=0.9, epsilon=10**-8, noise_strength=0)
adam_trajectory4 = adams(grad_beales_function, init4, Nsteps, eta=lr_l, gamma=0.9, beta=0.99, epsilon=10**-8, noise_strength=0)

overlay_trajectory_contour_M(ax1, gd_trajectory4, 'GD', 'k')
overlay_trajectory_contour_M(ax1, gdm_trajectory4, 'GDM', 'm')
overlay_trajectory_contour_M(ax1, NAG_trajectory4, 'NAG', 'c--')
overlay_trajectory_contour_M(ax1, rms_prop_trajectory4, 'RMS', 'b-.')
overlay_trajectory_contour_M(ax1, adam_trajectory4, 'ADAMS', 'r')

plt.show()
_____no_output_____
MIT
jupyter_notebooks/notebooks/NB2_CIV-gradient_descent.ipynb
jbRothschild/mlreview_notebooks
___
# 01-Linear Regression Project
___

KeytoDataScience.com

Congratulations!! KeytoDataScience just got some contract work with an Ecommerce company based in New York City that sells clothing online, but they also have in-store style and clothing advice sessions. Customers come in to the store, have sessions/meetings with a personal stylist, then they can go home and order either on a mobile app or website for the clothes they want.

__The company is trying to decide whether to focus their efforts on their mobile app experience or their website. They've hired you on contract on behalf of KeytoDataScience to help them figure it out!__

Let's get started! Just follow the steps below to analyze the customer data (emails and addresses in the data set are fake).

## 1 Imports

**Import pandas, numpy, matplotlib, and seaborn. (You'll import sklearn as you need it.)**

## 2 Get the Data

We'll work with the Ecommerce Customers csv file from the company. It has customer info, such as Email, Address, and their color Avatar. Then it also has numerical value columns:

* Avg. Session Length: Average session of in-store style advice sessions.
* Time on App: Average time spent on App in minutes
* Time on Website: Average time spent on Website in minutes
* Length of Membership: How many years the customer has been a member.

**Read in the Ecommerce Customers csv file as a DataFrame called customers.**

**Check the head of customers, and check out its info() and describe() methods.**

## 3 Exploratory Data Analysis

**Let's explore the data!**

For the rest of the exercise we'll only be using the numerical data of the csv file.

**Use seaborn to create a jointplot to compare the Time on Website and Yearly Amount Spent columns. Does the correlation make sense?**

**Do the same but with the Time on App column instead.**

**Use jointplot to create a 2D hex bin plot comparing Time on App and Length of Membership.**

**Let's explore these types of relationships across the entire data set. Use [pairplot](https://stanford.edu/~mwaskom/software/seaborn/tutorial/axis_grids.html#plotting-pairwise-relationships-with-pairgrid-and-pairplot) to recreate the plot below. (Don't worry about the colors.)**

**Based off this plot, what looks to be the most correlated feature with Yearly Amount Spent?**
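As a hedged sketch of the load-and-inspect steps: the real file is the "Ecommerce Customers" csv named above, but here a tiny synthetic stand-in with the same column names keeps the example self-contained (the numbers are made up for illustration only):

```python
import io
import pandas as pd

# tiny synthetic stand-in for the 'Ecommerce Customers' csv (columns from the text)
csv = io.StringIO(
    "Avg. Session Length,Time on App,Time on Website,Length of Membership,Yearly Amount Spent\n"
    "34.5,12.6,39.5,4.0,587.9\n"
    "31.9,11.1,37.3,2.7,392.2\n"
    "33.0,11.3,37.1,4.1,487.5\n"
)
customers = pd.read_csv(csv)  # with the real data: pd.read_csv('Ecommerce Customers')
print(customers.describe())
# correlations with the target suggest which feature matters most
print(customers.corr()['Yearly Amount Spent'])
```

With the actual dataset, `sns.pairplot(customers)` then reproduces the grid of pairwise scatter plots asked for above.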
# Length of Membership
_____no_output_____
MIT
Data Science Course/1. Programming/3. Python/Module 8 - Linear Regression/Practice Problem/01-Linear Regression Project.ipynb
tensorbored/career-now-program
3rd laboratory exercise
# load the required libraries
import numpy as np
import matplotlib.pyplot as plt
import scipy.signal as ss

#@title helper function
# run this cell, but don't worry about the implementation details

def plot_frequency_response(f, Hm, fc=None, ylim_min=None):
    """Plot the frequency response of a filter.

    Args
        f (numpy.ndarray) : frequencies
        Hm (numpy.ndarray) : magnitude of the transfer function
        fc (number) : cutoff frequency
        ylim_min (number): minimum y-axis value for the dB scale

    Returns
        (matplotlib.figure.Figure, matplotlib.axes._subplots.AxesSubplot)
    """
    Hc = 1 / np.sqrt(2)
    if fc is None:
        fc_idx = np.where(np.isclose(Hm, Hc, rtol=1e-03))[0][0]
        fc = f[fc_idx]
    H_db = 20 * np.log10(Hm)
    fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(12, 7.5))
    ax[0, 0].plot(f, Hm, label='$H(f)$')
    ax[0, 0].plot(fc, Hc, 'o', label='$H(f_c)$')
    ax[0, 0].vlines(fc, Hm.min(), Hc, linestyle='--')
    ax[0, 0].annotate(f'$f_c = {fc:.3f}$ Hz\n$H(f_c)={Hc:.3f}$', (fc * 1.4, Hc))
    ax[0, 0].set_xscale('log')
    ax[0, 0].set_ylabel('$|V_{out}$ / $V_{in}$|')
    ax[0, 0].set_title('log scale')
    ax[0, 0].legend(loc='lower left')
    ax[0, 0].grid()
    ax[0, 1].plot(f, Hm, label='$H(f)$')
    ax[0, 1].plot(fc, Hc, 'o', label='$H(f_c)$')
    ax[0, 1].annotate(f'$f_c = {fc:.3f}$ Hz\n$H(f_c)={Hc:.3f}$', (fc * 1.4, Hc))
    ax[0, 1].set_title('linear scale')
    ax[0, 1].legend()
    ax[0, 1].grid()
    ax[1, 0].plot(f, H_db, label='$H_{dB}(f)$')
    ax[1, 0].plot(fc, H_db.max() - 3, 'o', label='$H_{dB}(f_c)$')
    ax[1, 0].vlines(fc, H_db.min(), H_db.max() - 3, linestyle='--')
    ax[1, 0].annotate(f'$f_c = {fc:.3f}$ Hz\n$H(f_c)={H_db.max() - 3:.3f} dB$', (fc * 1.4, H_db.max() - 3))
    ax[1, 0].set_xscale('log')
    ax[1, 0].set_xlabel('$f$ [Hz]')
    ax[1, 0].set_ylabel('$20 \\cdot \\log$ |$V_{out}$ / $V_{in}$|')
    if ylim_min:
        ax[1, 0].set_ylim((ylim_min, 10))
    ax[1, 0].legend(loc='lower left')
    ax[1, 0].grid()
    ax[1, 1].plot(f, H_db, label='$H_{dB}(f)$')
    ax[1, 1].plot(fc, H_db.max() - 3, 'o', label='$H_{dB}(f_c)$')
    ax[1, 1].annotate(f'$f_c = {fc:.3f}$ Hz\n$H(f_c)={H_db.max() - 3:.3f} dB$', (fc * 1.4, H_db.max() - 3))
    ax[1, 1].set_xlabel('$f$ [Hz]')
    if ylim_min:
        ax[1, 1].set_ylim((ylim_min, 10))
    ax[1, 1].legend()
    ax[1, 1].grid()
    fig.tight_layout()
    return fig, ax
_____no_output_____
MIT
emc_512/lab/Python/03-lab-ex.ipynb
antelk/teaching