Comments:<br><br> There are 4208 mushroom instances identified as edible and 3916 as poisonous. The distribution between my two target classes is approximately even, which is good - we won't have a selection bias towards one category just due to its overrepresentation in the dataset. <br> I will explore other properties of mushrooms and create bar or stacked charts to visualise the data.
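The class balance described above can be checked directly with `value_counts`; a minimal sketch on a toy stand-in for the real `df['classif']` column (the counts below mirror the figures quoted in the comment):

```python
import pandas as pd

# Toy stand-in for the real 'classif' column (e = edible, p = poisonous)
classif = pd.Series(['e'] * 4208 + ['p'] * 3916)

counts = classif.value_counts()
balance = counts / counts.sum()  # proportion of each class
print(counts.to_dict())   # {'e': 4208, 'p': 3916}
print(round(balance['p'], 3))  # poisonous share of the dataset
```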
tbl = pd.crosstab(index=df['classif'], columns=df['cap_colour'])
print(tbl)

# Grouped and stacked bar charts of cap-color distribution by classification
cap_colors = ['#f0dc82', '#D2691E', '#990000', '#696969', '#49311c',
              '#ff69b4', '#007f00', '#800080', '#ffffff', '#ffff00']
fig, axes = plt.subplots(nrows=1, ncols=2)
tbl.plot(kind="bar", stacked=False, ax=axes[0], color=cap_colors)
axes[0].legend_.remove()
tbl.plot(kind="bar", stacked=True, ax=axes[1], color=cap_colors)
axes[1].legend(['Buff', 'Cinnamon', 'Red', 'Gray', 'Brown', 'Pink',
                'Green', 'Purple', 'White', 'Yellow'],
               loc='center left', bbox_to_anchor=(1, 0.5))
fig.suptitle('Cap Color')
axes[0].set_ylabel('Species count')
plt.show()
fig.savefig('cap_color.jpg')
examples/dulybina/1_exploratory_analysis.ipynb
georgetown-analytics/machine-learning
mit
Comment: Surprisingly, cap color is not a great predictor of the edibility of a mushroom. Different colors have varied distributions among both classes. Just a few species with red and yellow caps tend to be poisonous more often, while brown, gray and white caps are more prevalent among edible mushrooms. However, one cannot draw any definitive conclusion and we need to investigate more characteristics.
gills = pd.crosstab(index=df['classif'], columns=df["gill_color"])
print(gills)
# gill-color: black=k, brown=n, buff=b, chocolate=h, gray=g, green=r,
#             orange=o, pink=p, purple=u, red=e, white=w, yellow=y
s1 = pd.crosstab(index=df['classif'], columns=df['gill_size'])
print(s1)
examples/dulybina/1_exploratory_analysis.ipynb
georgetown-analytics/machine-learning
mit
<table cellpadding="0" cellspacing="0" border="0" width = 100%> <tbody> <tr> <td style="text-align:right; width:150px" colspan ="1"><p style="text-align:left"><b>Comments:</b><br><br>Gill sizes seem to be equally distributed among poisonous mushrooms, although we can see that the majority of edible mushrooms have broad gills. <br><br> Gill colors vary greatly; however, one can notice that only edible mushrooms have red or orange colored gills - they are safe to eat. A few poisonous mushrooms have green gills - that could be a very certain identification of a poisonous type (picture: False Parasol with green gills). Additionally, the most frequent gill color among poisonous mushrooms is buff - and you can be certain that no edible mushroom has buff gills.</p></td> <td valign="right" style="width: 170px; " colspan="1"><img style="height:250px;width:350px;" align="top" src="images/green_gilled_parasol.jpg" alt="Mushrooms" /></td></tr> </tbody> </table>
fig, axes = plt.subplots(nrows=1, ncols=2)
plt.tight_layout()
gills.plot(kind="bar", stacked=True, ax=axes[1],
           color=['#f0dc82', '#990000', '#696969', '#4E2E28', '#000000',
                  '#49311c', '#FFA500', '#ff69b4', '#007f00', '#800080',
                  '#ffffff', '#ffff00'])
axes[1].legend(['Buff', 'Red', 'Gray', 'Chocolate', 'Black', 'Brown', 'Orange',
                'Pink', 'Green', 'Purple', 'White', 'Yellow'],
               loc='center left', bbox_to_anchor=(1, 0.5))
axes[1].set_title('Gills Color')
axes[0].set_ylabel('Species count')
s1.plot(kind="bar", stacked=True, ax=axes[0], color=['#ffd1dc', '#d1fff4'])
axes[0].set_title('Gills Size')
axes[0].legend(['Broad', 'Narrow'], loc='center')
plt.show()
fig.savefig('gills_color_size.jpg')

odors = pd.crosstab(index=df['classif'], columns=df["odor"])
print(odors)
# odor: almond=a, creosote=c, anise=l, fishy=y, foul=f, musty=m,
#       none=n, pungent=p, spicy=s
od = odors.plot(kind="bar", figsize=(4, 4), stacked=True, cmap=plt.cm.RdYlGn)
od.set_title('Odor')
od.set_ylabel('Species count')
od.set_xlabel('Classification')
od.legend(['Almond', 'Creosote', 'Foul', 'Anise', 'Musty', 'None',
           'Pungent', 'Spicy', 'Fishy'],
          loc='center left', bbox_to_anchor=(1, 0.5))
od.get_figure().savefig('odor.jpg')  # save the odor figure (not the previous fig)
examples/dulybina/1_exploratory_analysis.ipynb
georgetown-analytics/machine-learning
mit
Comments:<br> Odor seems to be very important in differentiating between poisonous and non-poisonous mushrooms in our dataset. If the mushroom smells like almond or anise - it is indeed edible. If there is no smell at all - the mushroom is most likely edible, but you cannot be sure: of all the mushrooms that don't have a smell, about 3.5 % are poisonous. Additionally, only poisonous mushrooms smell fishy, spicy, pungent, foul, creosote-like or musty. Although sometimes difficult to identify, smell is indeed an important feature.
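A figure like the 3.5 % quoted above can be read off a row-normalized crosstab; a minimal sketch on toy data (the real notebook would pass df['odor'] and df['classif'], and the toy proportions here are illustrative, not the real ones):

```python
import pandas as pd

# Toy stand-in: 100 odorless mushrooms, 4 of them poisonous
toy = pd.DataFrame({
    'classif': ['e'] * 96 + ['p'] * 4,
    'odor':    ['n'] * 100,   # all odorless in this toy sample
})

# Share of each class within each odor category
shares = pd.crosstab(toy['odor'], toy['classif'], normalize='index')
print(shares.loc['n', 'p'])  # fraction of odorless mushrooms that are poisonous
```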
veils = pd.crosstab(index=df['classif'], columns=df["veil_color"])
print(veils)
spores = pd.crosstab(index=df['classif'], columns=df["spore_print_color"])
print(spores)

fig, axes = plt.subplots(nrows=1, ncols=2)
veils.plot(kind="bar", stacked=True, ax=axes[0],
           color=['#49311c', '#ffa500', '#ffffff', '#ffff00'])
axes[0].legend(['Brown', 'Orange', 'White', 'Yellow'], loc='center left')
spores.plot(kind="bar", stacked=True, ax=axes[1],
            color=['#f0dc82', '#D2691E', '#000000', '#49311c', '#ffa500',
                   '#007f00', '#800080', '#ffffff', '#ffff00'])
axes[1].legend(['Buff', 'Chocolate', 'Black', 'Brown', 'Orange', 'Green',
                'Purple', 'White', 'Yellow'],
               loc='center left', bbox_to_anchor=(1, 0.5))
axes[0].set_ylabel('Species count')
axes[0].set_title('Veils Color')
axes[1].set_title('Spores Color')
plt.show()
fig.savefig('veils_spores_colors.jpg')
examples/dulybina/1_exploratory_analysis.ipynb
georgetown-analytics/machine-learning
mit
<table cellpadding="0" cellspacing="0" border="0" width = 100%> <tbody> <tr> <td style="text-align:right; width:150px" colspan ="1"><p style="text-align:left"><b>Comments:</b><br><br>Only edible mushrooms have brown or orange veils - you can safely eat those. However, both edible and poisonous mushrooms have white veils. If the veil is yellow - the mushroom will be identified as poisonous with 100 % certainty.<br><br>The color of spores provides a better picture - if the spores are buff, orange or yellow, the species must be edible. If the spores are green, the mushroom is poisonous in all cases. Most edible mushrooms have brown or black spores, and most poisonous ones white or chocolate. <br><br>It might be a little difficult to distinguish between brown and chocolate; however, the spore color feature might become statistically important. Additionally, the color discussed is a 'print' color, which is difficult and time-consuming to obtain. One needs to collect a mushroom and leave the cap under a glass for almost a day to get a pattern with the print color.</p></td> <td valign="right" style="width: 170px; " colspan="1"><img style="height:250px;width:350px;" align="top" src="images/spores.jpg" alt="Mushrooms" /><img style="height:250px;width:350px;" align="bottom" src="images/veil.png" /></td></tr> </tbody> </table>
stalk_above = pd.crosstab(index=df['classif'], columns=df["stalk_color_above_ring"])
print(stalk_above)
stalk_below = pd.crosstab(index=df['classif'], columns=df["stalk_color_below_ring"])
print(stalk_below)
examples/dulybina/1_exploratory_analysis.ipynb
georgetown-analytics/machine-learning
mit
Comments:<br> Stalk colors above and below the ring vary between edible and poisonous mushrooms; however, there are rules one can follow to distinguish them with certainty using the stalk color features. If the stalk color (both above and below the ring) is buff, yellow or cinnamon - the mushroom has to be poisonous. All mushrooms with red, gray or orange stalks are edible.
stalk_colors = ['#f0dc82', '#D2691E', '#990000', '#696969', '#49311c',
                '#ffa500', '#ff69b4', '#ffffff', '#ffff00']
fig, axes = plt.subplots(nrows=1, ncols=2)
stalk_above.plot(kind="bar", stacked=True, ax=axes[0], color=stalk_colors)
axes[0].legend_.remove()
stalk_below.plot(kind="bar", stacked=True, ax=axes[1], color=stalk_colors)
axes[1].legend(['Buff', 'Cinnamon', 'Red', 'Gray', 'Brown', 'Orange',
                'Pink', 'White', 'Yellow'],
               loc='center left', bbox_to_anchor=(1, 0.5))
axes[0].set_ylabel('Species count')
axes[0].set_title('Stalk color above ring')
axes[1].set_title('Stalk color below ring')
plt.show()
#fig.savefig('stalk_colors.jpg')

hab = pd.crosstab(index=df['classif'], columns=df["habitat"])
print(hab)
pop = pd.crosstab(index=df['classif'], columns=df["population"])
print(pop)

a = hab.plot(kind="bar", stacked=True, figsize=(3, 3), cmap=plt.cm.RdYlGn)
a.legend(['Woods', 'Grasses', 'Leaves', 'Meadows', 'Paths', 'Urban', 'Waste'],
         loc='center left', bbox_to_anchor=(1, 0.5))
a.set_ylabel('Species count')
a.set_xlabel('Classification')
a.set_title('Type of habitat')
plt.show()
a.get_figure().savefig('habitat.jpg')  # save the habitat figure, not the stalk one

p = pop.plot(kind="bar", stacked=True, figsize=(3, 3), cmap=plt.cm.RdYlGn)
p.legend(['Abundant', 'Clustered', 'Numerous', 'Scattered', 'Several', 'Solitary'],
         loc='center left', bbox_to_anchor=(1, 0.5))
p.set_ylabel('Species count')
p.set_xlabel('Classification')
p.set_title('Type of population')
plt.show()
p.get_figure().savefig('population.jpg')
examples/dulybina/1_exploratory_analysis.ipynb
georgetown-analytics/machine-learning
mit
<table cellpadding="0" cellspacing="0" border="0" width = 100%> <tbody> <tr> <td style="text-align:right; width:150px" colspan ="1"><p style="text-align:left"><b>Comments:</b><br><br>Well, regarding the habitat feature, the "good" news is that no poisonous mushrooms grow in waste. Also, a large proportion of poisonous species grow on paths and barely any on meadows. There are more edible mushrooms in woods and grasses, but poisonous mushrooms occur in those environments as well. Among leaves you would mostly find poisonous mushrooms.<br> If you see an 'abundant' amount of mushrooms, or 'numerous' ones - they have to be edible; poisonous ones don't grow in those patterns. However, clustered, scattered and solitary populations occur among poisonous mushrooms as well as edible ones. If you see just several mushrooms - be careful: most poisonous mushrooms belong to that type of population distribution. Despite its potential relevance, I would have trouble distinguishing between population categories, so this feature is irrelevant from a practical point of view.</p></td> <td valign="right" style="width:170px;" colspan="1"><img src="images/toadstool.jpg" alt="Mushrooms" /></td> </tr> </tbody> </table>
ring = pd.crosstab(index=df['classif'], columns=df["ring_type"])
print(ring)
r = ring.plot(kind='bar', cmap=plt.cm.RdYlGn)
r.set_xlabel('classification')
r.set_ylabel('count')
r.set_title("By Ring Type")
r.legend(['Evanescent', 'Flaring', 'Large', 'None', 'Pendant'])
plt.show()
examples/dulybina/1_exploratory_analysis.ipynb
georgetown-analytics/machine-learning
mit
<table cellpadding="0" cellspacing="0" border="0" width = 100%> <tbody> <tr> <td style="text-align:right; width:150px" colspan ="1"><p style="text-align:left"><b>Comments:</b><br><br>There are fewer distinct ring types observed in the sample dataset compared with the metadata description. However, this feature might be important for classification. Many (but not all!) edible mushrooms have a pendant ring type, and only edible mushrooms have the flaring type. If no ring is observed at all, or the ring is large - the mushroom is poisonous! </p></td> <td valign="right" style="width:170px;" colspan="1"><img src="images/ring_type.jpg" alt="Mushrooms" /></td> </tr> </tbody> </table>
# gill-spacing: close=c, crowded=w, distant=d
gsp = pd.crosstab(index=df['classif'], columns=df["gill_space"])
print(gsp)
r = gsp.plot(kind='bar', cmap=plt.cm.RdYlGn)
r.set_xlabel('classification')
r.set_ylabel('count')
r.set_title("By Gill Spacing")
r.legend(['Close', 'Crowded'])
plt.show()
examples/dulybina/1_exploratory_analysis.ipynb
georgetown-analytics/machine-learning
mit
<table cellpadding="0" cellspacing="0" border="0" width = 100%> <tbody> <tr> <td style="text-align:right; width:150px" colspan ="1"><p style="text-align:left"><b>Comments:</b><br><br>The metadata defines 3 categories, but only 2 types of gill spacing are observed: close and crowded. This feature might be important, as there are significantly more edible mushrooms with crowded gill spacing.</p></td> <td valign="right" style="width:170px;" colspan="1"><img src="images/gill_spacing.jpeg" alt="Mushrooms" /></td> </tr> </tbody> </table> Recode unique values of different features Replacement of "1-letter" values with "1-word" values for easier model operation later. <br> The choice of values to recode is not random - it is based on the features identified as most important in the 'feature_selection' notebook, as well as my choice of the most 'practical' features. <br>
df['population'].replace(['a', 'c', 'n', 's', 'v', 'y'],
                         ['Abundant', 'Clustered', 'Numerous', 'Scattered', 'Several', 'Solitary'],
                         inplace=True)
df['habitat'].replace(['d', 'g', 'l', 'm', 'p', 'u', 'w'],
                      ['Woods', 'Grasses', 'Leaves', 'Meadows', 'Paths', 'Urban', 'Waste'],
                      inplace=True)
df['cap_colour'].replace(['b', 'c', 'e', 'g', 'n', 'p', 'r', 'u', 'w', 'y'],
                         ['Buff', 'Cinnamon', 'Red', 'Gray', 'Brown', 'Pink', 'Green', 'Purple', 'White', 'Yellow'],
                         inplace=True)
df['spore_print_color'].replace(['b', 'h', 'k', 'n', 'o', 'r', 'u', 'w', 'y'],
                                ['Buff', 'Chocolate', 'Black', 'Brown', 'Orange', 'Green', 'Purple', 'White', 'Yellow'],
                                inplace=True)
df['odor'].replace(['a', 'c', 'f', 'l', 'm', 'n', 'p', 's', 'y'],
                   ['Almond', 'Creosote', 'Foul', 'Anise', 'Musty', 'None', 'Pungent', 'Spicy', 'Fishy'],
                   inplace=True)
df['gill_color'].replace(['b', 'e', 'g', 'h', 'k', 'n', 'o', 'p', 'r', 'u', 'w', 'y'],
                         ['Buff', 'Red', 'Gray', 'Chocolate', 'Black', 'Brown', 'Orange', 'Pink', 'Green', 'Purple', 'White', 'Yellow'],
                         inplace=True)
df['stalk_surf_above_ring'].replace(['f', 'k', 's', 'y'],
                                    ['Fibrous', 'Silky', 'Smooth', 'Scaly'],
                                    inplace=True)
df['gill_size'].replace(['b', 'n'], ['Broad', 'Narrow'], inplace=True)
df['bruises'].replace(['f', 't'], ['No', 'Bruises'], inplace=True)
df['stalk_color_above_ring'].replace(['b', 'c', 'e', 'g', 'n', 'o', 'p', 'w', 'y'],
                                     ['Buff', 'Cinnamon', 'Red', 'Gray', 'Brown', 'Orange', 'Pink', 'White', 'Yellow'],
                                     inplace=True)
df['stalk_color_below_ring'].replace(['b', 'c', 'e', 'g', 'n', 'o', 'p', 'w', 'y'],
                                     ['Buff', 'Cinnamon', 'Red', 'Gray', 'Brown', 'Orange', 'Pink', 'White', 'Yellow'],
                                     inplace=True)
df['gill_space'].replace(['c', 'w'], ['Close', 'Crowded'], inplace=True)
df['ring_type'].replace(['e', 'f', 'l', 'n', 'p'],
                        ['Evanescent', 'Flaring', 'Large', 'None', 'Pendant'],
                        inplace=True)
df['classif'].replace(['e', 'p'], ['Edible', 'Poisonous'], inplace=True)
df.head()
examples/dulybina/1_exploratory_analysis.ipynb
georgetown-analytics/machine-learning
mit
Missing values in stalk_root feature There are 2480 missing values under the feature stalk_root. I will drop the feature from the beginning to avoid dropping the samples associated with those missing values. Alternatively, it is possible to keep the feature and drop all observations where the value is missing: df = df.drop(df[df['stalk_root']=='?'].index)
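Both strategies mentioned above can be sketched side by side on toy data (the column names mirror the real dataset; the values here are made up for illustration):

```python
import pandas as pd

# Toy stand-in for the real dataset; '?' marks a missing stalk_root
df = pd.DataFrame({
    'classif':    ['e', 'p', 'e', 'p'],
    'stalk_root': ['b', '?', 'c', '?'],
})

# Option 1 (chosen in this notebook): drop the whole feature
dropped_col = df.drop('stalk_root', axis=1)

# Option 2: keep the feature, drop rows where the value is missing
dropped_rows = df.drop(df[df['stalk_root'] == '?'].index)

print(dropped_col.shape)   # (4, 1) - all rows kept, one column gone
print(dropped_rows.shape)  # (2, 2) - half the rows gone, both columns kept
```

Option 1 preserves every sample at the cost of a feature; Option 2 preserves the feature at the cost of 2480 samples in the real data.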
df['stalk_root'].value_counts()
examples/dulybina/1_exploratory_analysis.ipynb
georgetown-analytics/machine-learning
mit
The crosstab display doesn't show a type of stalk_root that is prevalent in poisonous mushrooms. If that were the case, I would assign that stalk root type to all the missing values, to avoid the risk of classifying a poisonous mushroom as edible based on that feature. Another option could be to assign the mode value - 'b' - but the most common value occurs often in both 'e' and 'p' classified mushrooms. Final decision: drop this feature. After doing some feature analysis, I found that stalk root qualities were not determinant for distinguishing between edible and poisonous mushrooms.
sr = pd.crosstab(index=df['classif'], columns=df["stalk_root"])
print(sr)
df.drop('stalk_root', axis=1, inplace=True)
examples/dulybina/1_exploratory_analysis.ipynb
georgetown-analytics/machine-learning
mit
Save data to CSV
df.to_csv('/Users/dariaulybina/Desktop/georgetown/ml_practice/data/data.csv')
examples/dulybina/1_exploratory_analysis.ipynb
georgetown-analytics/machine-learning
mit
Implement Preprocessing Functions The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below: - Lookup Table - Tokenize Punctuation Lookup Table To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries: - Dictionary to go from the words to an id, we'll call vocab_to_int - Dictionary to go from the id to word, we'll call int_to_vocab Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
import numpy as np
import problem_unittests as tests
from collections import Counter

def create_lookup_tables(text):
    """
    Create lookup tables for vocabulary
    :param text: The text of tv scripts split into words
    :return: A tuple of dicts (vocab_to_int, int_to_vocab)
    """
    vocab = set(text)  # build the vocabulary once and reuse it for both dicts
    vocab_to_int = {word: ind for ind, word in enumerate(vocab)}
    int_to_vocab = {ind: word for word, ind in vocab_to_int.items()}
    return vocab_to_int, int_to_vocab

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
tv-script-generation/dlnd_tv_script_generation.ipynb
gaoshuming/udacity
mit
Tokenize Punctuation We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!". Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token: - Period ( . ) - Comma ( , ) - Quotation Mark ( " ) - Semicolon ( ; ) - Exclamation mark ( ! ) - Question mark ( ? ) - Left Parentheses ( ( ) - Right Parentheses ( ) ) - Dash ( -- ) - Return ( \n ) This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
def token_lookup():
    """
    Generate a dict to turn punctuation into a token.
    :return: Tokenize dictionary where the key is the punctuation and the value is the token
    """
    return {'.': '||Period||',
            ',': '||Comma||',
            '"': '||QuotationMark||',
            ';': '||Semicolon||',
            '!': '||Exclamationmark||',
            '?': '||Questionmark||',
            '(': '||LeftParentheses||',
            ')': '||RightParentheses||',
            '--': '||Dash||',
            '\n': '||Return||'}

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
tv-script-generation/dlnd_tv_script_generation.ipynb
gaoshuming/udacity
mit
Input Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders: - Input text placeholder named "input" using the TF Placeholder name parameter. - Targets placeholder - Learning Rate placeholder Return the placeholders in the following tuple (Input, Targets, LearningRate)
import tensorflow as tf

def get_inputs():
    """
    Create TF Placeholders for input, targets, and learning rate.
    :return: Tuple (input, targets, learning rate)
    """
    inputs = tf.placeholder(dtype=tf.int32, shape=[None, None], name='input')
    targets = tf.placeholder(dtype=tf.int32, shape=[None, None], name='target')
    learning_rate = tf.placeholder(dtype=tf.float32, name='learning_rate')
    return inputs, targets, learning_rate

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
tv-script-generation/dlnd_tv_script_generation.ipynb
gaoshuming/udacity
mit
Build RNN Cell and Initialize Stack one or more BasicLSTMCells in a MultiRNNCell. - The RNN size should be set using rnn_size - Initialize Cell State using the MultiRNNCell's zero_state() function - Apply the name "initial_state" to the initial state using tf.identity() Return the cell and initial state in the following tuple (Cell, InitialState)
def get_init_cell(batch_size, rnn_size):
    """
    Create an RNN Cell and initialize it.
    :param batch_size: Size of batches
    :param rnn_size: Size of RNNs
    :return: Tuple (cell, initial state)
    """
    # Stack two BasicLSTMCells in a MultiRNNCell
    cell = tf.contrib.rnn.MultiRNNCell(
        [tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(2)])
    # Get the initial (zero) state and name it for later retrieval
    initial_state = tf.identity(cell.zero_state(batch_size, tf.float32),
                                name='initial_state')
    return cell, initial_state

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
tv-script-generation/dlnd_tv_script_generation.ipynb
gaoshuming/udacity
mit
Build RNN You created an RNN Cell in the get_init_cell() function. Time to use the cell to create an RNN. - Build the RNN using tf.nn.dynamic_rnn() - Apply the name "final_state" to the final state using tf.identity() Return the outputs and final state in the following tuple (Outputs, FinalState)
def build_rnn(cell, inputs):
    """
    Create a RNN using a RNN Cell
    :param cell: RNN Cell
    :param inputs: Input text data
    :return: Tuple (Outputs, Final State)
    """
    outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
    final_state = tf.identity(final_state, name='final_state')
    return outputs, final_state

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
tv-script-generation/dlnd_tv_script_generation.ipynb
gaoshuming/udacity
mit
Batches Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements: - The first element is a single batch of input with the shape [batch size, sequence length] - The second element is a single batch of targets with the shape [batch size, sequence length] If you can't fill the last batch with enough data, drop the last batch. For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following: ``` [ # First Batch [ # Batch of Input [[ 1 2 3], [ 7 8 9]], # Batch of targets [[ 2 3 4], [ 8 9 10]] ], # Second Batch [ # Batch of Input [[ 4 5 6], [10 11 12]], # Batch of targets [[ 5 6 7], [11 12 13]] ] ] ```
def get_batches(int_text, batch_size, seq_length):
    """
    Return batches of input and target
    :param int_text: Text with the words replaced by their ids
    :param batch_size: The size of batch
    :param seq_length: The length of sequence
    :return: Batches as a Numpy array
    """
    n_batches = len(int_text) // (batch_size * seq_length)
    result = []
    for i in range(n_batches):
        inputs = []
        targets = []
        for j in range(batch_size):
            # Row j of every batch continues the same sub-sequence, so rows
            # are spaced n_batches * seq_length apart (matches the example above)
            idx = i * seq_length + j * n_batches * seq_length
            inputs.append(int_text[idx:idx + seq_length])
            targets.append(int_text[idx + 1:idx + seq_length + 1])
        result.append([inputs, targets])
    return np.array(result)

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
tv-script-generation/dlnd_tv_script_generation.ipynb
gaoshuming/udacity
mit
Neural Network Training Hyperparameters Tune the following parameters: Set num_epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set embed_dim to the size of the embedding. Set seq_length to the length of sequence. Set learning_rate to the learning rate. Set show_every_n_batches to the number of batches after which the neural network should print progress.
# Number of Epochs
num_epochs = 100
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 256
# Embedding Dimension Size
embed_dim = 200
# Sequence Length
seq_length = 25
# Learning Rate
learning_rate = 0.02
# Show stats for every n number of batches
show_every_n_batches = 50

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
tv-script-generation/dlnd_tv_script_generation.ipynb
gaoshuming/udacity
mit
Choose Word Implement the pick_word() function to select the next word using probabilities.
def pick_word(probabilities, int_to_vocab):
    """
    Pick the next word in the generated text
    :param probabilities: Probabilities of the next word
    :param int_to_vocab: Dictionary of word ids as the keys and words as the values
    :return: String of the predicted word
    """
    # Greedy choice: always take the most probable word
    return int_to_vocab[np.argmax(probabilities)]

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
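The implementation above takes the argmax, which always returns the single most likely word and can make generated scripts repetitive. A common alternative (not the approach used in this notebook) is to sample from the probability distribution; a minimal sketch with a hypothetical toy vocabulary:

```python
import numpy as np

# Hypothetical toy vocabulary and probability vector, for illustration only
int_to_vocab = {0: 'homer', 1: 'moe', 2: 'bart'}
probabilities = np.array([0.2, 0.5, 0.3])

def pick_word_sampled(probabilities, int_to_vocab):
    """Sample the next word id according to its probability."""
    idx = np.random.choice(len(probabilities), p=probabilities)
    return int_to_vocab[idx]

np.random.seed(0)  # fixed seed makes the draw reproducible
print(pick_word_sampled(probabilities, int_to_vocab))
```

Sampling keeps some randomness in the generated text while still favoring high-probability words.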
tv-script-generation/dlnd_tv_script_generation.ipynb
gaoshuming/udacity
mit
Remember that the form of data we will always use is with the "response" as a plain array [1,1,0,0,0,1,0,1,0....]. Your turn: Create a scatter plot of Weight vs. Height. Color the points differently by Gender.
# your turn
plt.scatter(dflog.Weight, dflog.Height,
            c=[cmap_bold.colors[i] for i in dflog.Gender == "Male"],
            alpha=0.1)
logistic_regression/Mini_Project_Logistic_Regression.ipynb
farfan92/SpringBoard-
mit
In the Linear Regression Mini Project, the last (extra credit) exercise was to write a K-Fold cross-validation. Feel free to use that code below, or just use the cv_score function we've provided.
from sklearn.model_selection import KFold
from sklearn.metrics import accuracy_score

def cv_score(clf, x, y, score_func=accuracy_score):
    result = 0
    nfold = 5
    kf = KFold(shuffle=False, n_splits=nfold)
    for train, test in kf.split(x):  # split data into train/test groups, 5 times
        clf.fit(x[train], y[train])  # fit
        result += score_func(clf.predict(x[test]), y[test])  # evaluate score function on held-out data
    return result / nfold  # average
logistic_regression/Mini_Project_Logistic_Regression.ipynb
farfan92/SpringBoard-
mit
While this looks like a pretty great model, we would like to ensure two things: We have found the best model (in terms of model parameters). The model is highly likely to generalize i.e. perform well on unseen data. For tuning your model, you will use a mix of cross-validation and grid search. In Logistic Regression, the most important parameter to tune is the regularization parameter C. You will now implement some code to perform model tuning. Your turn: Implement the following search procedure to find a good model You are given a list of possible values of C below For each C: Create a logistic regression model with that value of C Find the average score for this model using the cv_score function only on the training set (Xlr,ylr) Pick the C with the highest average score Your goal is to find the best model parameters based only on the training set, without showing the model test set at all (which is why the test set is also called a hold-out set).
# the grid of parameters to search over
Cs = [0.001, 0.01, 0.1, 1, 10, 100]
score_array = []

# your turn
for reg_param in Cs:
    clf = LogisticRegression(C=reg_param)
    score = cv_score(clf, Xlr, ylr)
    score_array.append(score)

max_score = max(score_array)
best_C = Cs[score_array.index(max_score)]
print(score_array)
print("Best score: ", max_score, ", from C =", best_C)

# your turn
clf = LogisticRegression(C=best_C)
clf.fit(Xlr, ylr)
y_prediction = clf.predict(Xtestlr)
print("Accuracy score is: ", accuracy_score(y_prediction, ytestlr))
logistic_regression/Mini_Project_Logistic_Regression.ipynb
farfan92/SpringBoard-
mit
Things to think about You may notice that this particular value of C may or may not do as well as simply running the default model on a random train-test split. Do you think that's a problem? Why do we need to do this whole cross-validation and grid search stuff anyway? Not necessarily. The goal in cross-validation is to avoid overfitting. We want to find the best choice of regularization parameter that will allow us to generalize our predictions. So our choice of C is influenced by how well the various parameters perform on the split-up training data. We use this to pick the best parameter. The accuracy score at the end, comparing the predicted values to the true ones, is really a test of the choice of MODEL. We would compare this score with alternative models/algorithms. We do these processes to avoid overfitting. We want to make sure we are fitting our data so as to best make predictions about unseen data. By creating multiple training sets, and looking at performance across each split, we have a better chance of making our model robust to new entries. We note that all the Cs from 0.1 onwards returned the same score. Use scikit-learn's GridSearchCV tool Your turn (extra credit): Use scikit-learn's GridSearchCV tool to perform cross validation and grid search. Instead of writing your own loops above to iterate over the model parameters, can you use GridSearchCV to find the best model over the training set? Does it give you the same best value of C? How does this model you've obtained perform on the test set?
# your turn
from sklearn.model_selection import GridSearchCV

clf2 = LogisticRegression()
parameters = {"C": [0.0001, 0.001, 0.01, 0.1, 1, 10, 100]}
grid_fit = GridSearchCV(clf2, param_grid=parameters, cv=5, scoring="accuracy")
grid_fit.fit(Xlr, ylr)
print("Best parameter is: ", grid_fit.best_params_)

clf2 = LogisticRegression(C=grid_fit.best_params_['C'])
clf2.fit(Xlr, ylr)
y_predictions = clf2.predict(Xtestlr)
print("Accuracy score is: ", accuracy_score(y_predictions, ytestlr))
print("grid scores were: ", grid_fit.cv_results_['mean_test_score'])
logistic_regression/Mini_Project_Logistic_Regression.ipynb
farfan92/SpringBoard-
mit
We do not obtain the same value of $C$. Here we get that $C=0.001$ is the highest scoring choice. This time we find that the choices from C=1 and upwards score exactly the same. Using the GridSearchCV tool, we now get a higher accuracy score of $0.9256$ using $C=0.001$, instead of our previous value of $0.9252$ for $C=0.01$. Recap of the math behind Logistic Regression (optional, feel free to skip) Setting up some code Let's make a small diversion, though, and set some code up for classification using cross-validation so that we can easily run classification models in scikit-learn. We first set up a function cv_optimize which takes a classifier clf, a grid of hyperparameters (such as a complexity parameter or regularization parameter, as in the last exercise) implemented as a dictionary parameters, a training set (as a samples x features array) Xtrain, and a set of labels ytrain. The code takes the training set, splits it into n_folds parts, sets up n_folds folds, and carries out the cross-validation by splitting the training set into a training and validation section for each fold for us. It prints the best value of the parameters, and returns the best classifier to us.
def cv_optimize(clf, parameters, Xtrain, ytrain, n_folds=5):
    gs = GridSearchCV(clf, param_grid=parameters, cv=n_folds)
    gs.fit(Xtrain, ytrain)
    print("BEST PARAMS", gs.best_params_)
    best = gs.best_estimator_
    return best
logistic_regression/Mini_Project_Logistic_Regression.ipynb
farfan92/SpringBoard-
mit
We then use this best classifier to fit the entire training set. This is done inside the do_classify function, which takes a dataframe indf as input. It takes the columns in the list featurenames as the features used to train the classifier. The column targetname sets the target. The classification is done by setting those samples for which targetname has value target1val to the value 1, and all others to 0. We split the dataframe into 80% training and 20% testing by default, standardizing the dataset if desired. (Standardizing a data set involves scaling the data so that it has 0 mean and is described in units of its standard deviation.) We then train the model on the training set using cross-validation. Having obtained the best classifier using cv_optimize, we retrain on the entire training set and calculate the training and testing accuracy, which we print. We return the split data and the trained classifier.
from sklearn.model_selection import train_test_split

def do_classify(clf, parameters, indf, featurenames, targetname, target1val, standardize=False, train_size=0.8):
    subdf = indf[featurenames]
    if standardize:
        subdfstd = (subdf - subdf.mean()) / subdf.std()
    else:
        subdfstd = subdf
    X = subdfstd.values
    y = (indf[targetname].values == target1val) * 1
    Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, train_size=train_size)
    clf = cv_optimize(clf, parameters, Xtrain, ytrain)
    clf = clf.fit(Xtrain, ytrain)
    training_accuracy = clf.score(Xtrain, ytrain)
    test_accuracy = clf.score(Xtest, ytest)
    print("Accuracy on training data: %0.2f" % training_accuracy)
    print("Accuracy on test data: %0.2f" % test_accuracy)
    return clf, Xtrain, ytrain, Xtest, ytest
logistic_regression/Mini_Project_Logistic_Regression.ipynb
farfan92/SpringBoard-
mit
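The standardization step mentioned above (zero mean, unit standard deviation per column) is just (x - mean)/std, and its defining property is easy to verify on a fake feature matrix:

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.normal(loc=5.0, scale=3.0, size=(100, 4))   # fake feature matrix
Xstd = (X - X.mean(axis=0)) / X.std(axis=0)         # standardize each column
```

After this transform every column of Xstd has mean 0 and standard deviation 1, so features measured in very different units become comparable.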
Start by initializing a Euclidean N-dimensional algebra and assigning our pseudoscalar to $I$, pretty standard.
from clifford import Cl
from math import *

l, b = Cl(3)  # returns (layout, blades); you can change the dimension here
I = l.pseudoScalar
docs/tutorials/MatrixRepresentationsOfGeometricFunctions.ipynb
arsenovic/clifford
bsd-3-clause
Anti-symmetric This one is easy: $$x \rightarrow x\cdot B$$
B = l.randomIntMV()(2)  # we use randomIntMV because it's easier to read
f = lambda x: x | B
func2Mat(f, I=I)
docs/tutorials/MatrixRepresentationsOfGeometricFunctions.ipynb
arsenovic/clifford
bsd-3-clause
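func2Mat (defined earlier in the notebook) builds a matrix representation column-by-column by applying the function to each basis element. The same idea can be checked in plain NumPy for the vector part alone: in 3D, x ↦ x·B acts (up to sign) like a cross product with the dual vector of B, so the resulting matrix is antisymmetric. A sketch with a made-up dual vector b:

```python
import numpy as np

def func2mat_vec(f, dim=3):
    """Matrix of a linear map on R^dim, built column-by-column from basis vectors."""
    return np.column_stack([f(e) for e in np.eye(dim)])

b = np.array([1.0, -2.0, 0.5])   # hypothetical dual vector of some bivector B
f = lambda x: np.cross(b, x)     # the vector shadow of x -> x|B
M = func2mat_vec(f)
```

The columns of M are just f applied to e1, e2, e3, which is exactly what func2Mat does over the full blade basis.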
What's $B$? You can read its values straight off the matrix.
B
docs/tutorials/MatrixRepresentationsOfGeometricFunctions.ipynb
arsenovic/clifford
bsd-3-clause
Diagonal (Directional Scaling) A bit awkward, this one, but it's made by projecting onto each basis vector, then scaling each component by some amount. $$ x \rightarrow \sum{\lambda_i (x\cdot e_i) e_i} $$
ls = range(1, len(I.basis()) + 1)  # some dilation values (eigenvalues)
A = I.basis()
d = lambda x: sum([(x | a) / a * l for a, l in zip(A, ls)])
func2Mat(d, I=I)
docs/tutorials/MatrixRepresentationsOfGeometricFunctions.ipynb
arsenovic/clifford
bsd-3-clause
Orthogonal, Rotation $$ x\rightarrow Rx\tilde{R}$$ where $$R=e^{B/2}$$
B = l.randomMV()(2)
R = e**(B/2)
r = lambda x: R * x * ~R
func2Mat(r, I=I)
docs/tutorials/MatrixRepresentationsOfGeometricFunctions.ipynb
arsenovic/clifford
bsd-3-clause
The inverse of this is $$ x\rightarrow \tilde{R}xR $$
rinv = lambda x: ~R * x * R  # the inverse rotation
func2Mat(rinv, I=I)
docs/tutorials/MatrixRepresentationsOfGeometricFunctions.ipynb
arsenovic/clifford
bsd-3-clause
Orthogonal, Reflection $$ x \rightarrow -axa^{-1} $$
a = l.randomIntMV()(1)
n = lambda x: -a * x / a
func2Mat(n, I=I)
a
docs/tutorials/MatrixRepresentationsOfGeometricFunctions.ipynb
arsenovic/clifford
bsd-3-clause
Notice the determinant for reflection is -1, and for rotation is +1.
from numpy.linalg import det

det(func2Mat(n, I=I)), det(func2Mat(r, I=I))
docs/tutorials/MatrixRepresentationsOfGeometricFunctions.ipynb
arsenovic/clifford
bsd-3-clause
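The same determinant signs can be verified with ordinary matrices, independent of the clifford code above: a Householder reflection has determinant -1, a rotation +1. A NumPy sketch with made-up axis and angle:

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])                       # reflection axis (hypothetical)
H = np.eye(3) - 2.0 * np.outer(a, a) / (a @ a)      # Householder reflection about plane normal to a

theta = 0.7                                         # hypothetical rotation angle
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])  # rotation about e3
```

det(H) comes out -1 and det(R) comes out +1, matching the clifford results above.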
Symmetric This can be built up from the functions we just defined, i.e. Rotation*Dilation/Rotation $$ x \rightarrow r(d(r^{-1}(x))) $$ which, if you write it out, looks kind of dumb $$ x \rightarrow R\left[\sum{\lambda_i ((\tilde{R}x R)\cdot e_i) e_i}\right]\tilde{R} $$ So, the symmetric matrix is interpreted as a set of dilations along some orthogonal frame rotated from the basis (what basis, eh? exactly what basis!). More generally we could include reflection in the $R$ too.
g = lambda x: r(d(rinv(x)))
func2Mat(g, I=I)
docs/tutorials/MatrixRepresentationsOfGeometricFunctions.ipynb
arsenovic/clifford
bsd-3-clause
Eigen stuffs By definition the eigen-stuff consists of the invariants of the transformation; sometimes this is a vector, and other times it is a plane. Rotation The eigen blades of a rotation are really the axis and plane of rotation.
from numpy.linalg import eig

vals, vecs = eig(func2Mat(r, I=I))
np.round(vecs, 3)
docs/tutorials/MatrixRepresentationsOfGeometricFunctions.ipynb
arsenovic/clifford
bsd-3-clause
If you check out the real column, and compare this to the bivector which generated this rotation (aka the generator), after it's been normalized
B/(abs(B))
B
vals
cos(abs(B)), sin(abs(B))
docs/tutorials/MatrixRepresentationsOfGeometricFunctions.ipynb
arsenovic/clifford
bsd-3-clause
Symmetric For the symmetric matrix, the invariant thing is the orthonormal frame along which the dilations take place
vals, vecs = eig(func2Mat(g, I=I))
np.round(vecs, 5).T
docs/tutorials/MatrixRepresentationsOfGeometricFunctions.ipynb
arsenovic/clifford
bsd-3-clause
This is easily found by using the rotation part of the symmetric operator,
[R*a*~R for a in I.basis()]
docs/tutorials/MatrixRepresentationsOfGeometricFunctions.ipynb
arsenovic/clifford
bsd-3-clause
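The claim that a symmetric map is "dilations along a rotated frame" is just the spectral theorem, and it can be checked with plain NumPy: build a symmetric matrix as R * diag * R^T and confirm that eig recovers the dilation values (the numbers here are hypothetical, not the notebook's random ones):

```python
import numpy as np

lams = np.array([1.0, 2.0, 3.0])   # hypothetical dilation values
theta = 0.5                        # hypothetical rotation angle about e3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

G = R @ np.diag(lams) @ R.T        # symmetric: rotation * dilation * inverse rotation
vals, vecs = np.linalg.eig(G)
```

The eigenvalues come back as the dilation values and the eigenvectors as the rotated frame, mirroring the clifford computation above.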
Primitive Visualization in 2D
from pylab import linspace, plot, axis, legend

def plot_ps(ps, **kw):
    x = [p[e1] for p in ps]
    y = [p[e2] for p in ps]
    plot(x, y, marker='o', ls='', **kw)

l, b = Cl(2)
locals().update(b)
I = l.pseudoScalar

## define function of interest
B = l.randomMV()(2)
R = e**(B/2)
f = lambda x: R * x * ~R

## loop through cartesian grid and apply f
ps, qs = [], []
for x in linspace(-1, 1, 11):
    for y in linspace(-1, 1, 11):
        p = x*e1 + y*e2
        q = f(p)
        ps.append(p)
        qs.append(q)

plot_ps(ps, label='before')
plot_ps(qs, label='after')
axis('equal')
legend()
func2Mat(f, I=I)
docs/tutorials/MatrixRepresentationsOfGeometricFunctions.ipynb
arsenovic/clifford
bsd-3-clause
Simple UAV drag curves We are going to compute induced and parasite drag (in a very basic way) as an example to introduce scripting. \begin{align} q&= \frac{1}{2} \rho V_\infty^2 \ D_i &= \frac{L^2}{q \pi b^2 e} \ D_p &= {C_D}_p q S \end{align} The first few lines make some imports. The first line is only for Jupyter notebooks. It just tells the notebook to show plots inline. You wouldn't need that in a normal python script.
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from math import pi

# all in standard English units

# atmosphere
rho = 0.0024  # air density

# geometry
b = 8.0  # wing span
chord = 1.0

# mass properties
W = 2.4  # total weight of aircraft

# other parameters
e = 0.9  # Oswald efficiency factor
CDp = 0.02  # could compute but just input for simplicity

# an array of wind speeds
V = np.linspace(10, 30, 100)

# Induced drag
q = 0.5*rho*V**2  # dynamic pressure
L = W  # equilibrium flight
Di = L**2/(q*pi*b**2*e)

# parasite drag
S = b*chord
Dp = CDp*q*S

# these next 3 lines purely for style in the plot (loading a predefined stylesheet)
# I have my own custom styles I use, but for this example let's use one of matplotlib's
plt.style.use('ggplot')
plt.rcParams.update({'font.size': 16})
colors = plt.rcParams['axes.color_cycle']  # grab the current color scheme

# plot it
plt.figure()
plt.plot(V, Di)
plt.plot(V, Dp)
plt.plot(V, Di+Dp)
plt.xlabel('V (ft/s)')
plt.ylabel('Drag (lbs)')

# label the plots
plt.text(25, 0.06, 'induced drag', color=colors[0])
plt.text(12, 0.06, 'parasite drag', color=colors[1])
plt.text(20, 0.17, 'total drag', color=colors[2])
onboarding/PythonPrimer.ipynb
BYUFLOWLab/BYUFLOWLab.github.io
mit
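One classic sanity check on these curves: total drag is minimized at the speed where induced and parasite drag are equal (since one falls and the other rises with q). A small sketch using the same numbers as the script above:

```python
import numpy as np
from math import pi

rho, b, chord, W, e, CDp = 0.0024, 8.0, 1.0, 2.4, 0.9, 0.02
S = b * chord
V = np.linspace(10, 30, 2001)
q = 0.5 * rho * V**2
Di = W**2 / (q * pi * b**2 * e)   # induced drag (L = W in level flight)
Dp = CDp * q * S                  # parasite drag
i_min = np.argmin(Di + Dp)        # index of minimum total drag
```

At V[i_min] the two drag components are (to within grid resolution) equal, which is the standard minimum-drag-speed condition.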
Try it yourself. Let's do the same calculation, but with reusable functions. In Python functions are easy to define. A simple example is below (note that, unlike Matlab, you can have as many functions as you want in a file).
def func(x, y):
    add = x + y
    mult = x * y
    return add, mult

a, m = func(1.0, 3.0)
print('a =', a, 'm =', m)
a, m = func(2.0, 7.0)
print('a =', a, 'm =', m)

def induced_drag():
    pass

def parasite_drag():
    pass

# atmosphere
rho = 0.0024  # air density

# geometry
b = 8.0  # wing span
chord = 1.0

# mass properties
W = 2.4  # total weight of aircraft

# other parameters
e = 0.9  # Oswald efficiency factor
CDp = 0.02  # could compute but just input for simplicity

# wind speeds
V = np.linspace(10, 30, 100)
onboarding/PythonPrimer.ipynb
BYUFLOWLab/BYUFLOWLab.github.io
mit
Finally, let's do it once more, but in an object-oriented style.
class UAV(object):

    def __init__(self, b, chord, W, rho):
        self.b = b
        self.S = b*chord
        self.L = W
        self.rho = rho

    def induced_drag(self, V, e):
        q = 0.5*self.rho*V**2
        Di = self.L**2/(q*pi*self.b**2*e)
        return Di

    def parasite_drag(self, V, CDp):
        q = 0.5*self.rho*V**2
        Dp = CDp*q*self.S
        return Dp

# atmosphere
rho = 0.0024  # air density

# geometry
b = 8.0  # wing span
chord = 1.0

# mass properties
W = 2.4  # total weight of aircraft

# setup UAV object
uav = UAV(b, chord, W, rho)

# setup sweep
V = np.linspace(10, 30, 100)

# idrag
e = 0.9  # Oswald efficiency factor
Di = uav.induced_drag(V, e)

# pdrag
CDp = 0.02
Dp = uav.parasite_drag(V, CDp)

# style
plt.style.use('fivethirtyeight')

# plot it
plt.figure()
plt.plot(V, Di)
plt.plot(V, Dp)
plt.plot(V, Di+Dp)
plt.xlabel('V (ft/s)')
plt.ylabel('Drag (lbs)')

# label the plots
colors = plt.rcParams['axes.color_cycle']
plt.text(25, 0.06, 'induced drag', color=colors[0])
plt.text(12, 0.06, 'parasite drag', color=colors[1])
plt.text(20, 0.17, 'total drag', color=colors[2])
onboarding/PythonPrimer.ipynb
BYUFLOWLab/BYUFLOWLab.github.io
mit
Wrapping Fortran/C This next example is going to solve Laplace's equation on a grid. First, we will do it in pure Python, then we will rewrite a portion of the code in Fortran and call it from Python for improved speed. Recall Laplace's equation: $$ \nabla^2 \phi = 0 $$ where $\phi$ is some scalar. For a regular rectangular grid, with equal spacing in x and y, you might recall that a simple iterative method for solving this equation consists of the following update rule: $$ \phi_{i, j} = \frac{1}{4} (\phi_{i+1, j} + \phi_{i-1, j} + \phi_{i, j+1} + \phi_{i, j-1})$$ In other words, each cell updates its value using the average value of all of its neighbors (note that there are much more efficient ways to solve Laplace's equation on a grid; for our purposes we just want to keep things simple). This process must be repeated for every cell in the domain, and repeated until converged. We are going to run a simple case where boundary values are provided at the top, bottom, left, and right edges. You should iterate until the maximum change in $\phi$ is below some tolerance (tol) or until a maximum number of iterations is reached (iter_max). n is the number of cells (same discretization in x and y). I've started a script for you below. See if you can fill in the details. I've not provided all the syntax you will need to know, so you may have to look some things up. A full implementation is down below, but don't peek unless you are really stuck!
from math import fabs

def laplace_grid_python(n, top, bottom, left, right, tol, iter_max):

    # initialize
    phi = np.zeros((n+1, n+1))
    iters = 0  # number of iterations
    err_max = 1e6  # maximum error in grid (start at some arbitrary number just to enter loop)

    # set boundary conditions

    # run while loop until tolerance reached or max iterations
    while ():

        # reset the maximum error to something small (I suggest something like -1)
        err_max = -1.0

        # loop over all *interior* cells
        for i in range():
            for j in range():

                # save previous point for computing error later
                phi_prev = phi[i, j]

                # update point
                phi[i, j] =

                # update maximum error
                err_max =

        # update iteration count
        iters += 1

    return phi, err_max, iters

# run a sample case (50 x 50 grid with bottom and left at 1.0, top and right at 0.0)
n = 50
top = 0.0
bottom = 1.0
left = 1.0
right = 0.0
tol = 1e-5
iter_max = 10000

phi, err_max, iters = laplace_grid_python(n, top, bottom, left, right, tol, iter_max)

# plot it
x = np.linspace(0, 1, n+1)
y = np.linspace(0, 1, n+1)
[X, Y] = np.meshgrid(x, y)

plt.figure()
plt.contourf(X, Y, phi, 100, cmap=plt.cm.get_cmap("YlGnBu"))
plt.colorbar()
plt.show()
onboarding/PythonPrimer.ipynb
BYUFLOWLab/BYUFLOWLab.github.io
mit
In our implementation below we add the IPython magic %%timeit at the top. This will run the whole block some number of times and report the best time back. We add some blank space below just to visually separate the answer. <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br>
%%timeit
from math import fabs

def laplace_grid_python(n, top, bottom, left, right, tol, iter_max):

    # initialize
    phi = np.zeros((n+1, n+1))
    iters = 0  # number of iterations
    err_max = 1e6  # maximum error in grid (start at some arbitrary number just to enter loop)

    # set boundary conditions
    phi[0, :] = bottom
    phi[-1, :] = top
    phi[:, 0] = left
    phi[:, -1] = right

    # run while loop until tolerance reached or max iterations
    while (err_max > tol and iters < iter_max):

        # reset the maximum error to something small (I suggest something like -1)
        err_max = -1.0

        # loop over all interior cells
        for i in range(1, n):
            for j in range(1, n):

                # save previous point
                phi_prev = phi[i, j]

                # update point
                phi[i, j] = (phi[i-1, j] + phi[i+1, j] + phi[i, j-1] + phi[i, j+1])/4.0

                # update maximum error
                err_max = max(err_max, fabs(phi[i, j] - phi_prev))

        # update iteration count
        iters += 1

    return phi, err_max, iters

# run a sample case (50 x 50 grid with bottom and left at 1.0, top and right at 0.0)
n = 50
top = 0.0
bottom = 1.0
left = 1.0
right = 0.0
tol = 1e-5
iter_max = 10000

phi, err_max, iters = laplace_grid_python(n, top, bottom, left, right, tol, iter_max)

# plot it
x = np.linspace(0, 1, n+1)
y = np.linspace(0, 1, n+1)
[X, Y] = np.meshgrid(x, y)

plt.figure()
plt.contourf(X, Y, phi, 100, cmap=plt.cm.get_cmap("YlGnBu"))
plt.colorbar()
plt.show()
onboarding/PythonPrimer.ipynb
BYUFLOWLab/BYUFLOWLab.github.io
mit
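Before reaching for Fortran, it is worth noting that the double Python loop can also be vectorized with NumPy slicing. This is a Jacobi-style sweep (it updates all cells simultaneously, unlike the in-place Gauss-Seidel loop above, so the iteration counts differ), shown here as a sketch on a smaller grid:

```python
import numpy as np

def laplace_grid_vectorized(n, top, bottom, left, right, tol, iter_max):
    """Jacobi update via array slicing: no explicit per-cell Python loop."""
    phi = np.zeros((n + 1, n + 1))
    phi[0, :], phi[-1, :], phi[:, 0], phi[:, -1] = bottom, top, left, right
    err_max = 1e6
    for iters in range(iter_max):
        # average of the four neighbors, computed for all interior cells at once
        new = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] + phi[1:-1, :-2] + phi[1:-1, 2:])
        err_max = np.abs(new - phi[1:-1, 1:-1]).max()
        phi[1:-1, 1:-1] = new
        if err_max < tol:
            break
    return phi, err_max, iters + 1

phi, err_max, iters = laplace_grid_vectorized(20, 0.0, 1.0, 1.0, 0.0, 1e-5, 20000)
```

The boundary rows and columns are never touched by the update, and the interior stays bounded by the boundary values (the maximum principle), which makes a handy correctness check.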
This takes a while. Let's move the double for loop computation to Fortran. I've supplied a file called laplace.f90 where I've done this for you. We just need to build this as a shared library so we can call it from Python. Open a terminal (you can do this in try.jupyter.org as well). We will compile the Fortran code to a shared library with f2py (we could also use standard gfortran compilation, but you get more flexibility and automatic setup with f2py). In all of the below I am using an O2 optimization flag. Note the underscore in the shared library name; this is just convention. Note that if you make a mistake in importing, IPython caches your modules, so you'd need to restart the kernel. Using f2py f2py -c --opt=-O2 -m _laplace laplace.f90 Using a setup script. Open up a file and call it setup.py. At a minimum this is all it needs: from numpy.distutils.core import setup, Extension setup( ext_modules=[Extension('_laplace', ['laplace.f90'], extra_compile_args=['-O2'])] ) Usually a lot more information would be added (name, license, other python packages, etc.). You can read more about setuptools later. To build it one normally just uses build or install commands, but we will build it in place for testing. python setup.py build_ext --inplace Now we can call it from Python just like a regular method. An example is shown below doing the exact same thing as before, but calling the Fortran code in laplacegridfortran. This runs over 10x faster (and could be even faster if we used ifort instead of gfortran). How much faster the code is of course depends on the problem; as you change n the difference will become more or less significant.
%%timeit
from _laplace import laplacegridfortran

n = 50
top = 0.0
bottom = 1.0
left = 1.0
right = 0.0
tol = 1e-5
iter_max = 10000

phi, err_max, iters = laplacegridfortran(n, top, bottom, left, right, tol, iter_max)

# plot it
x = np.linspace(0, 1, n+1)
y = np.linspace(0, 1, n+1)
[X, Y] = np.meshgrid(x, y)

plt.figure()
plt.contourf(X, Y, phi, 100, cmap=plt.cm.get_cmap("YlGnBu"))
plt.colorbar()
plt.show()
onboarding/PythonPrimer.ipynb
BYUFLOWLab/BYUFLOWLab.github.io
mit
Loading the mesh. This mesh is not rotated, so we use default values. If your mesh is rotated, don't forget to use the abg parameter.
meshpath = '/mnt/lustre01/work/ab0995/a270088/data/core_mesh/'
mesh = pf.load_mesh(meshpath, usepickle=False, usejoblib=True)
notebooks/big_data_processing.ipynb
FESOM/pyfesom
mit
Open multiple files at once. Please have a look at this page to understand what chunks are for.
data = xr.open_mfdataset('/work/ab0995/a270067/fesom_echam/core/cpl_output_02/fesom.200?.oce.mean.nc', chunks={'time': 12})
notebooks/big_data_processing.ipynb
FESOM/pyfesom
mit
Now you have a Dataset that has all the data in it.
data
notebooks/big_data_processing.ipynb
FESOM/pyfesom
mit
You can see that in this version of FESOM output there is a bug with shifted time stamps (time starts from '2000-02-01'). We are going to fix it. Create time stamps with pandas:
dates = pd.date_range('2000', '2010', freq='M')
dates
notebooks/big_data_processing.ipynb
FESOM/pyfesom
mit
Note that you have to put in one year more in this case, since the right boundary is not included. Now replace the time stamps in the data with the right ones:
data.time.data = dates
data
notebooks/big_data_processing.ipynb
FESOM/pyfesom
mit
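The right-boundary behaviour is easy to verify on its own: with freq='M' (month-end stamps), the end year '2010' contributes nothing, so '2000' to '2010' yields exactly the 120 month-ends of 2000 through 2009:

```python
import pandas as pd

# month-end stamps; the right edge (2010) is excluded because 2010-01-01 is not a month end
dates = pd.date_range('2000', '2010', freq='M')
```

This is why one extra year has to be given to cover a decade of monthly data.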
Good, we now have the right time stamps and can work with time. Time mean The time mean over the whole time period is simple:
temp_mean = data.temp.mean(dim='time')
notebooks/big_data_processing.ipynb
FESOM/pyfesom
mit
Here we select the temp variable and apply mean to it. You also have to specify the dimension (dim) that you want to take the mean over. You probably noticed that the "computation" was performed very quickly. This is because there was no computation at all, just preparation for it. To actually do the computation, run:
temp_mean = temp_mean.compute()
temp_mean
notebooks/big_data_processing.ipynb
FESOM/pyfesom
mit
Mean over time slice One can use slices to select data over some time period:
data.temp.sel(time=slice('2000-01-01', '2003-12-31')).time
notebooks/big_data_processing.ipynb
FESOM/pyfesom
mit
The mean over this slice will look like this:
temp_mean_3years = data.temp.sel(time=slice('2000-01-01', '2003-12-31')).mean(dim='time')
temp_mean_3years = temp_mean_3years.compute()
notebooks/big_data_processing.ipynb
FESOM/pyfesom
mit
Mean over specific month !!! PLEASE READ THIS TO GET MORE INFORMATION !!! Our data are monthly:
data.time[:14]
notebooks/big_data_processing.ipynb
FESOM/pyfesom
mit
The sel method allows you to provide explicit time steps, so if we select just the March values:
data.time[2::12]
notebooks/big_data_processing.ipynb
FESOM/pyfesom
mit
we can provide these values directly to sel. We also take the mean over the selected times and do the computation:
temp_march_mean = data.temp.sel(time=data.time[2::12]).mean(dim='time')
temp_march_mean = temp_march_mean.compute()
notebooks/big_data_processing.ipynb
FESOM/pyfesom
mit
xarray has a more explicit syntax to select months (it returns an array that shows which month each record in your array corresponds to):
data['time.month']
notebooks/big_data_processing.ipynb
FESOM/pyfesom
mit
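The same datetime component works on a plain pandas DatetimeIndex, which makes the pattern easy to test outside xarray (hypothetical monthly dates, not the FESOM output):

```python
import pandas as pd

dates = pd.date_range('2000', '2010', freq='M')   # 120 monthly stamps
march = dates[dates.month == 3]                   # boolean mask, same idea as data['time.month'] == 3
```

Each of the ten years contributes exactly one March stamp to the selection.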
Using this syntax you can easily select March:
data.temp[data['time.month']==3]
notebooks/big_data_processing.ipynb
FESOM/pyfesom
mit
And make a mean over this month:
temp_march_mean = data.temp[data['time.month']==3].mean(dim='time')
temp_march_mean = temp_march_mean.compute()
notebooks/big_data_processing.ipynb
FESOM/pyfesom
mit
Please have a look at this page to see what "datetime components" are supported. At the time of this writing the list contains: “year”, “month”, “day”, “hour”, “minute”, “second”, “dayofyear”, “week”, “dayofweek”, “weekday” and “quarter”. An additional xarray-specific component is season. You can select winter temperature values and average over them with:
temp_DJF_mean = data.temp[data['time.season']=='DJF'].mean(dim='time')
temp_DJF_mean = temp_DJF_mean.compute()
notebooks/big_data_processing.ipynb
FESOM/pyfesom
mit
Resampling Once again, please read this page to get more information. If we would like to resample our data, making yearly means, the way to do it is:
yearly_data = data.resample(time='1A').mean(dim='time')
yearly_data
yearly_data = yearly_data.compute()
notebooks/big_data_processing.ipynb
FESOM/pyfesom
mit
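The resampling rule is easy to sanity-check on a toy monthly pandas Series (made-up values, same '1A' yearly-mean frequency as above):

```python
import pandas as pd

# 24 monthly values: 0..11 in year 2000, 12..23 in year 2001
s = pd.Series(range(24), index=pd.date_range('2000-01-31', periods=24, freq='M'))
yearly = s.resample('1A').mean()   # one mean value per calendar year
```

The two yearly means come out as the averages of each year's twelve monthly values.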
Migrating tf.summary usage to TF 2.x <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/tensorboard/migrate"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorboard/blob/master/docs/migrate.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/tensorboard/blob/master/docs/migrate.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/tensorboard/docs/migrate.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Note: This doc is for people who are already familiar with TensorFlow 1.x TensorBoard and who want to migrate large TensorFlow code bases from TensorFlow 1.x to 2.x. If you're new to TensorBoard, see the get started doc instead. If you are using tf.keras there may be no action you need to take to upgrade to TensorFlow 2.x.
import tensorflow as tf
site/en-snapshot/tensorboard/migrate.ipynb
tensorflow/docs-l10n
apache-2.0
TensorFlow 2.x includes significant changes to the tf.summary API used to write summary data for visualization in TensorBoard. What's changed It's useful to think of the tf.summary API as two sub-APIs: A set of ops for recording individual summaries - summary.scalar(), summary.histogram(), summary.image(), summary.audio(), and summary.text() - which are called inline from your model code. Writing logic that collects these individual summaries and writes them to a specially formatted log file (which TensorBoard then reads to generate visualizations). In TF 1.x The two halves had to be manually wired together - by fetching the summary op outputs via Session.run() and calling FileWriter.add_summary(output, step). The v1.summary.merge_all() op made this easier by using a graph collection to aggregate all summary op outputs, but this approach still worked poorly for eager execution and control flow, making it especially ill-suited for TF 2.x. In TF 2.X The two halves are tightly integrated, and now individual tf.summary ops write their data immediately when executed. Using the API from your model code should still look familiar, but it's now friendly to eager execution while remaining graph-mode compatible. Integrating both halves of the API means the summary.FileWriter is now part of the TensorFlow execution context and gets accessed directly by tf.summary ops, so configuring writers is the main part that looks different. Example usage with eager execution, the default in TF 2.x:
writer = tf.summary.create_file_writer("/tmp/mylogs/eager")

with writer.as_default():
    for step in range(100):
        # other model code would go here
        tf.summary.scalar("my_metric", 0.5, step=step)
        writer.flush()

ls /tmp/mylogs/eager
site/en-snapshot/tensorboard/migrate.ipynb
tensorflow/docs-l10n
apache-2.0
Example usage with tf.function graph execution:
writer = tf.summary.create_file_writer("/tmp/mylogs/tf_function")

@tf.function
def my_func(step):
    with writer.as_default():
        # other model code would go here
        tf.summary.scalar("my_metric", 0.5, step=step)

for step in tf.range(100, dtype=tf.int64):
    my_func(step)
    writer.flush()

ls /tmp/mylogs/tf_function
site/en-snapshot/tensorboard/migrate.ipynb
tensorflow/docs-l10n
apache-2.0
Example usage with legacy TF 1.x graph execution:
g = tf.compat.v1.Graph()
with g.as_default():
    step = tf.Variable(0, dtype=tf.int64)
    step_update = step.assign_add(1)
    writer = tf.summary.create_file_writer("/tmp/mylogs/session")
    with writer.as_default():
        tf.summary.scalar("my_metric", 0.5, step=step)
    all_summary_ops = tf.compat.v1.summary.all_v2_summary_ops()
    writer_flush = writer.flush()

with tf.compat.v1.Session(graph=g) as sess:
    sess.run([writer.init(), step.initializer])
    for i in range(100):
        sess.run(all_summary_ops)
        sess.run(step_update)
        sess.run(writer_flush)

ls /tmp/mylogs/session
site/en-snapshot/tensorboard/migrate.ipynb
tensorflow/docs-l10n
apache-2.0
Converting your code Converting existing tf.summary usage to the TF 2.x API cannot be reliably automated, so the tf_upgrade_v2 script just rewrites it all to tf.compat.v1.summary and will not enable the TF 2.x behaviors automatically. Partial Migration To make migration to TF 2.x easier for users of model code that still depends heavily on the TF 1.x summary API logging ops like tf.compat.v1.summary.scalar(), it is possible to migrate only the writer APIs first, allowing for individual TF 1.x summary ops inside your model code to be fully migrated at a later point. To support this style of migration, <a href="https://www.tensorflow.org/api_docs/python/tf/compat/v1/summary"><code>tf.compat.v1.summary</code></a> will automatically forward to their TF 2.x equivalents under the following conditions: The outermost context is eager mode A default TF 2.x summary writer has been set A non-empty value for step has been set for the writer (using <a href="https://www.tensorflow.org/api_docs/python/tf/summary/SummaryWriter#as_default"><code>tf.summary.SummaryWriter.as_default</code></a>, <a href="https://www.tensorflow.org/api_docs/python/tf/summary/experimental/set_step"><code>tf.summary.experimental.set_step</code></a>, or alternatively <a href="https://www.tensorflow.org/api_docs/python/tf/compat/v1/train/create_global_step"><code>tf.compat.v1.train.create_global_step</code></a>) Note that when TF 2.x summary implementation is invoked, the return value will be an empty bytestring tensor, to avoid duplicate summary writing. Additionally, the input argument forwarding is best-effort and not all arguments will be preserved (for instance family argument will be supported whereas collections will be removed). Example to invoke <a href="https://www.tensorflow.org/api_docs/python/tf/summary/scalar"><code>tf.summary.scalar</code></a> behaviors in <a href="https://www.tensorflow.org/api_docs/python/tf/compat/v1/summary/scalar"><code>tf.compat.v1.summary.scalar</code></a>:
# Enable eager execution.
tf.compat.v1.enable_v2_behavior()

# A default TF 2.x summary writer is available.
writer = tf.summary.create_file_writer("/tmp/mylogs/enable_v2_in_v1")
# A step is set for the writer.
with writer.as_default(step=0):
    # Below invokes `tf.summary.scalar`, and the return value is an empty bytestring.
    tf.compat.v1.summary.scalar('float', tf.constant(1.0), family="family")
site/en-snapshot/tensorboard/migrate.ipynb
tensorflow/docs-l10n
apache-2.0
Set up Set the number of points desired for the averaging and length of spectra. Also set up seaborn formats.
# Processing parameters
spfreq = 50e3  # Bandwidth
nspec = 256  # length of spectrum
rep1 = 10000  # number of pulses
L = 24.  # Length of pulse in standard processing
pulse = sp.ones(int(L))  # Pulse for standard processing
pulse_pergram = sp.ones(nspec)  # For periodogram
Nrg = 128  # Number of range gates for data

# Parameters for spectrum
species = ['O+', 'e-']
databloc = sp.array([[1.66e10, 1e3], [1.66e10, 2.5e3]])
f_c = 440e6

# set up seaborn
sns.set_style("whitegrid")
sns.set_context("notebook")
ExampleNotebooks/SpecEstimator.ipynb
jswoboda/SimISR
mit
IS Spectra This will create an example ISR spectra that will be used.
# Make spectrum
ISpec_ion = ISRSpectrum(centerFrequency=f_c, nspec=nspec, sampfreq=spfreq, dFlag=False)

f, cur_spec, rcs = ISpec_ion.getspecsep(databloc, species, rcsflag=True)
specsum = sp.absolute(cur_spec).sum()
cur_spec = len(cur_spec)*cur_spec*rcs/specsum
tau, acf = spect2acf(f, cur_spec)

fig, ax = plt.subplots(1, 2, sharey=False, figsize=(8, 4), facecolor='w')
rp, imp = ax[0].plot(tau*1e3, acf.real, tau*1e3, acf.imag)
ax[0].legend([rp, imp], ['Real', 'Imag'])
ax[0].set_ylabel('Amplitude')
ax[0].set_title('ACF')
ax[0].set_xlabel(r'$\tau$ in ms')
ax[1].plot(f*1e-3, cur_spec.real)
ax[1].set_ylabel('Amplitude')
ax[1].set_title('Spectrum')
ax[1].set_xlabel(r'f in kHz')
fig.tight_layout()
ExampleNotebooks/SpecEstimator.ipynb
jswoboda/SimISR
mit
White Noise A periodogram is applied to complex white Gaussian noise. This is here to show that the scipy random number generator outputs uncorrelated random variables.
xin = sp.power(2, -.5)*(sp.random.randn(rep1, nspec) + 1j*sp.random.randn(rep1, nspec))
Xfft = sp.power(nspec, -.5)*scfft.fftshift(scfft.fft(xin, axis=-1), axes=-1)
Xperiod = sp.power(Xfft.real, 2).mean(0) + sp.power(Xfft.imag, 2).mean(0)
tau2, acfperiod = spect2acf(f, Xperiod*nspec)

fig2, ax2 = plt.subplots(1, 2, sharey=False, figsize=(8, 4), facecolor='w')
rp, imp = ax2[0].plot(tau2*1e6, acfperiod.real, tau2*1e6, acfperiod.imag)
ax2[0].legend([rp, imp], ['Real', 'Imag'])
ax2[0].set_ylabel('Amplitude')
ax2[0].set_title('ACF')
ax2[0].set_xlabel(r'$\tau$ in $\mu$s')
ax2[1].plot(f*1e-3, Xperiod.real)
ax2[1].set_ylabel('Amplitude')
ax2[1].set_title('Spectrum')
ax2[1].set_xlabel(r'f in kHz')
ax2[1].set_ylim([0., 1.5])
fig2.tight_layout()
ExampleNotebooks/SpecEstimator.ipynb
jswoboda/SimISR
mit
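The flat-spectrum claim can be reproduced with NumPy alone (seeded, and with smaller sizes than the cell above so it runs quickly):

```python
import numpy as np

rng = np.random.RandomState(0)
rep, n = 5000, 64
# unit-variance complex white Gaussian noise
x = (rng.randn(rep, n) + 1j * rng.randn(rep, n)) / np.sqrt(2)
X = np.fft.fftshift(np.fft.fft(x, axis=-1), axes=-1) / np.sqrt(n)
periodogram = (np.abs(X)**2).mean(axis=0)   # should be close to 1 in every bin
```

Averaging over many realizations, every frequency bin converges to the noise variance, i.e. a flat spectrum, which is the uncorrelatedness being demonstrated above.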
Shaped Noise A set of shaped noise is created using the IS spectrum formed earlier, using linear predictive coding to apply the spectrum to the noise through the MakePulseDataRepLPC function. This is similar to the method used by vocoders to encode human speech. To show the effect of the LPC coloring, a periodogram estimator is applied to the noise.
Xdata = MakePulseDataRepLPC(pulse_pergram, cur_spec, 30, rep1, numtype=sp.complex128)
Xfftd = sp.power(nspec, -.5)*scfft.fftshift(scfft.fft(Xdata, axis=-1), axes=-1)
Xperiodd = sp.power(Xfftd.real, 2).mean(0) + sp.power(Xfftd.imag, 2).mean(0)
tau3, acfperiodd = spect2acf(f, Xperiodd*nspec)

fig3, ax3 = plt.subplots(1, 2, sharey=False, figsize=(8, 4), facecolor='w')
rp, imp = ax3[0].plot(tau3*1e6, acfperiodd.real, tau3*1e6, acfperiodd.imag)
ax3[0].legend([rp, imp], ['Real', 'Imag'])
ax3[0].set_ylabel('Amplitude')
ax3[0].set_title('ACF')
ax3[0].set_xlabel(r'$\tau$ in $\mu$s')
ax3[1].plot(f*1e-3, Xperiodd.real)
ax3[1].set_ylabel('Amplitude')
ax3[1].set_title('Spectrum')
ax3[1].set_xlabel(r'f in kHz')
fig3.tight_layout()
ExampleNotebooks/SpecEstimator.ipynb
jswoboda/SimISR
mit
Window Function When a long pulse is used in ISR, the ACF is estimated instead of the spectrum directly through the periodogram estimator. The estimation is a two-step process: first estimate the lags and then apply a summation rule. This leads to a windowing of the ACF, shown here. The window is also shown in the frequency domain, where it is applied as a convolution to the original spectrum.
v = 1
l = sp.arange(L)
W = -l**2/(L*v) + (L-v)*l/L/v + 1
Wp = sp.pad(W, (int(sp.ceil(float(nspec-L)/2)), int(sp.floor(float(nspec-L)/2))), 'constant', constant_values=0)
wfft = scfft.fftshift(scfft.fft(W, n=nspec))

fig4, ax4 = plt.subplots(1, 2, sharey=False, figsize=(8, 4), facecolor='w')
ax4[0].plot(l, W)
ax4[0].set_ylabel('Weighting')
ax4[0].set_title('Weighting')
ax4[0].set_xlabel(r'$l$')
rp, imp, abp = ax4[1].plot(f*1e-3, wfft.real, f*1e-3, wfft.imag, f*1e-3, sp.absolute(wfft))
ax4[1].legend([rp, imp, abp], ['Real', 'Imag', 'Abs'])
ax4[1].set_ylabel('Amplitude')
ax4[1].set_title('Spectrum')
ax4[1].set_xlabel(r'f in kHz')
fig4.tight_layout()
ExampleNotebooks/SpecEstimator.ipynb
jswoboda/SimISR
mit
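A related illustration (not the exact centered-lag weighting W above): the autocorrelation of a boxcar pulse is a triangle, which is the basic reason a finite pulse windows the estimated ACF in the first place:

```python
import numpy as np

L = 24
pulse = np.ones(L)
# full autocorrelation of the boxcar: a triangle, peaking at L for zero lag
tri = np.correlate(pulse, pulse, mode='full')
```

Fewer samples overlap at larger lags, so longer lags get down-weighted, exactly the qualitative effect of the weighting plotted above.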
Full ISR Data Creation and Estimator The basic data creation and processing behind SimISR for this case, for a single beam. The data are created along a set of samples by adding together a set of uncorrelated data sets. These sets of pulses are uncorrelated because any spatial correlation of the electron density fluctuations is much smaller than a range gate. After the ACFs are estimated, they are plotted with the input ACF and spectra with the window applied.
Xdata = sp.zeros((rep1, Nrg), dtype=sp.complex128)
Lint = int(L)
for i in range(int(Nrg-Lint)):
    Xdata[:, i:i+Lint] = MakePulseDataRepLPC(pulse, cur_spec, 40, rep1, numtype=sp.complex128) + Xdata[:, i:i+Lint]

lagsData = CenteredLagProduct(Xdata, numtype=sp.complex128, pulse=pulse, lagtype='centered')/rep1

ptype = 'long'
ts = 1.
sumrule = makesumrule(ptype, L, ts, lagtype='centered')
minrg = -sumrule[0].min()
maxrg = Nrg - sumrule[1].max()
Nrng2 = maxrg - minrg

lagsDatasum = sp.zeros((Nrng2, Lint), dtype=sp.complex128)
for irngnew, irng in enumerate(sp.arange(minrg, maxrg)):
    for ilag in range(Lint):
        lagsDatasum[irngnew, ilag] = lagsData[irng+sumrule[0, ilag]:irng+sumrule[1, ilag]+1, ilag].sum(axis=0)

# divide off the gain from the pulse stacking
lagsDatasum = lagsDatasum/L
ExampleNotebooks/SpecEstimator.ipynb
jswoboda/SimISR
mit
Plotting and Normalization of Input Spectra Need to apply window function to spectrum.
dt=tau[1]-tau[0] f1,spec_all=acf2spect(l*dt,lagsDatasum,n_s=nspec) acf_single = lagsDatasum[50] spec_single = spec_all[50] # Apply weighting and integrations from gain from pulse stacking acf_act=scfft.ifftshift(acf)[:Lint]*W feh,spec_act=acf2spect(l*dt,acf_act,n_s=nspec) fig5,ax5 = plt.subplots(1,2,sharey=False, figsize=(8,4),facecolor='w') rp,imp,act_acf=ax5[0].plot(l*dt*1e6,acf_single.real,l*dt*1e6,acf_single.imag,l*dt*1e6,acf_act.real) ax5[0].legend([rp,imp,act_acf],['Real','Imag','Actual']) ax5[0].set_ylabel('Amplitude') ax5[0].set_title('ACF') ax5[0].set_xlabel(r'$\tau$ in $\mu$s') est1,act_spec=ax5[1].plot(f*1e-3,spec_single.real,f*1e-3,spec_act.real) ax5[1].legend([est1,act_spec],['Estimated','Actual']) ax5[1].set_ylabel('Amplitude') ax5[1].set_title('Spectrum') ax5[1].set_xlabel(r'f in kHz') fig5.tight_layout()
ExampleNotebooks/SpecEstimator.ipynb
jswoboda/SimISR
mit
Create lookup table Let's create a lookup table of embeddings. We'll use the comments field of a storm reports table from NOAA. This is an example of the Feature Store design pattern.
%%bigquery CREATE OR REPLACE TABLE advdata.comments_embedding AS SELECT output_0 as comments_embedding, comments FROM ML.PREDICT(MODEL advdata.swivel_text_embed,( SELECT comments, LOWER(comments) AS sentences FROM `bigquery-public-data.noaa_preliminary_severe_storms.wind_reports` ))
02_data_representation/text_embeddings.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
Modeling the data The data appear to follow a sigmoid function (S-shaped curve) very closely. Logically, it makes sense: by the time the outbreak is discovered, there are many undiagnosed (and even asymptomatic) cases which lead to very rapid initial growth; later on, after a combination of aggressive measures to avoid further spread and immunity developed by potential hosts, the growth becomes much slower. Let's see if we can model it using some parameter fitting:
import math import numpy as np from scipy import optimize def logistic_function(x: float, a: float, b: float, c: float): ''' a / (1 + e^(-b * (x - c))) ''' return a / (1.0 + np.exp(-b * (x - c))) X, y = list(range(len(df))), df['total_confirmed'].tolist() # Providing a reasonable initial guess is crucial for this model params, _ = optimize.curve_fit(logistic_function, X, y, maxfev=int(1E5), p0=[max(y), 1, np.median(X)]) print('Estimated function: {0:.3f} / (1 + e^(-{1:.3f} * (X - {2:.3f})))'.format(*params)) confirmed = df[['total_confirmed']].rename(columns={'total_confirmed': 'Ground Truth'}) ax = confirmed.plot(kind='bar', figsize=(16, 8)) estimate = [logistic_function(x, *params) for x in X] ax.plot(df.index, estimate, color='red', label='Estimate') ax.legend();
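As a quick sanity check on this parameterization (a standalone sketch, independent of the notebook's `df`; the parameter values below are made up for illustration): the curve crosses half its asymptote exactly at `x = c` and saturates at `a` for large `x`.

```python
import numpy as np

def logistic(x, a, b, c):
    # Same parameterization as the fit above: a / (1 + e^(-b * (x - c)))
    return a / (1.0 + np.exp(-b * (x - c)))

a, b, c = 1000.0, 0.5, 20.0
half = logistic(c, a, b, c)    # at the midpoint x = c, the value is a / 2
top = logistic(1e6, a, b, c)   # far to the right, the curve saturates at a
```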
examples/logistic_modeling.ipynb
GoogleCloudPlatform/covid-19-open-data
apache-2.0
Gompertz function While the simple logistic function provides a reasonably good fit, it appears to under-estimate the growth rate after the initial outbreak. A better fit might be the Gompertz function, which is an asymmetric logistic function that has a slower growth decay until the curve goes flat over time. Let's take a look at using this new function to find the best parameters that fit the data:
def logistic_function(x: float, a: float, b: float, c: float): ''' a * e^(-b * e^(-cx)) ''' return a * np.exp(-b * np.exp(-c * x)) X, y = list(range(len(df))), df['total_confirmed'].tolist() # Providing a reasonable initial guess is crucial for this model params, _ = optimize.curve_fit(logistic_function, X, y, maxfev=int(1E5), p0=[max(y), np.median(X), .1]) print('Estimated function: {0:.3f} * e^(-{1:.3f} * e^(-{2:.3f}X))'.format(*params)) confirmed = df[['total_confirmed']].rename(columns={'total_confirmed': 'Ground Truth'}) ax = confirmed.plot(kind='bar', figsize=(16, 8)) estimate = [logistic_function(x, *params) for x in X] ax.plot(df.index, estimate, color='red', label='Estimate') ax.legend();
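The asymmetry mentioned above can be verified directly: the Gompertz inflection point sits at `a / e`, below the symmetric logistic's midpoint of `a / 2`, so more of the growth happens after the inflection. A standalone sketch with made-up parameters:

```python
import numpy as np

def gompertz(x, a, b, c):
    # a * e^(-b * e^(-c * x)), the same form fitted above
    return a * np.exp(-b * np.exp(-c * x))

a, b, c = 1000.0, 5.0, 0.1
x_inflect = np.log(b) / c                   # where the second derivative vanishes
val_inflect = gompertz(x_inflect, a, b, c)  # equals a / e, below the midpoint a / 2
ceiling = gompertz(1e6, a, b, c)            # the curve still saturates at a
```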
examples/logistic_modeling.ipynb
GoogleCloudPlatform/covid-19-open-data
apache-2.0
Evaluating the model That curve looks like a very good fit! Traditional epidemiology models generally capture a number of different parameters representing biology and social factors; however, the COVID-19 pandemic might be very challenging to fit for traditional models for a number of reasons: * It's a completely new disease, never seen before * Unprecedented, very aggressive measures have been taken by many nations to try to stop the spread of the disease * Testing has been held back by a combination of shortage of tests and political reasons If a known model is not being used, then a simpler model is more likely to be a better fit; too many parameters have a tendency to overfit the data which diminishes the model's ability to make predictions. In other words, the model may appear to be able to perfectly follow known data, but when asked to make a prediction about future data it will likely be wrong. This is one of the main reasons why machine learning is not a good tool for this task, since there is not enough data to avoid overfitting a model. Validating the model To validate our model, let's try to fit it again without looking at the last 3 days of data. Then, we can estimate the missing days using our model, and verify if the results still hold by comparing what the model thought was going to happen with the actual data.
ESTIMATE_DAYS = 3 # days held out for validation, matching the text above params_validate, _ = optimize.curve_fit(logistic_function, X[:-ESTIMATE_DAYS], y[:-ESTIMATE_DAYS]) # Project zero for all values except for the last ESTIMATE_DAYS projected = [0] * len(X[:-ESTIMATE_DAYS]) + [logistic_function(x, *params_validate) for x in X[-ESTIMATE_DAYS:]] projected = pd.Series(projected, index=df.index, name='Projected') confirmed = pd.DataFrame({'Ground Truth': df['total_confirmed'], 'Projected': projected}) ax = confirmed.plot(kind='bar', figsize=(16, 8)) estimate = [logistic_function(x, *params_validate) for x in X] ax.plot(df.index, estimate, color='red', label='Estimate') ax.legend();
examples/logistic_modeling.ipynb
GoogleCloudPlatform/covid-19-open-data
apache-2.0
Projecting future data It looks like our logistic model slightly underestimates the confirmed cases. This indicates that the model is optimistic about the slowdown of new cases being reported. A number of factors could affect this, like wider availability of tests. Ultimately, it is also possible that the logistic model is not an appropriate function to use. However, the predictions are close enough to the real data that this is probably a good starting point for a rough estimate over a short time horizon. Now, let's use the model we fitted earlier which used all the data, and try to predict what the next 3 days will look like.
import datetime # Append N new days to our indices date_format = '%Y-%m-%d' date_range = [datetime.datetime.strptime(date, date_format) for date in df.index] for _ in range(ESTIMATE_DAYS): date_range.append(date_range[-1] + datetime.timedelta(days=1)) date_range = [datetime.datetime.strftime(date, date_format) for date in date_range] # Perform projection with the previously estimated parameters projected = [0] * len(X) + [logistic_function(x, *params) for x in range(len(X), len(X) + ESTIMATE_DAYS)] projected = pd.Series(projected, index=date_range, name='Projected') df_ = pd.DataFrame({'Confirmed': df['total_confirmed'], 'Projected': projected}) ax = df_.plot(kind='bar', figsize=(16, 8)) estimate = [logistic_function(x, *params) for x in range(len(date_range))] ax.plot(date_range, estimate, color='red', label='Estimate') ax.legend();
examples/logistic_modeling.ipynb
GoogleCloudPlatform/covid-19-open-data
apache-2.0
Getting a Single Graph Using NetworkX
%%time graph_by_network_id = m.get_graph_by_id(11) print_summary(graph_by_network_id)
Database Comparison - Blobs versus Edge Store.ipynb
pybel/pybel-notebooks
apache-2.0
Using SQL
def get_graph_by_network_edges(manager, network_id, **kwargs): network = manager.get_network_by_id(network_id) edges = network.edges graph = BELGraph(**kwargs) for edge in edges: edge.insert_into_graph(graph) return graph %%time graph_by_edges = get_graph_by_network_edges(m, 11) print_summary(graph_by_edges)
Database Comparison - Blobs versus Edge Store.ipynb
pybel/pybel-notebooks
apache-2.0
This query works, but needs serious optimization to be generally useful, especially since this kind of query automatically eliminates the need to do in-memory graph join operations. Getting Multiple Graphs Using NetworkX
network_ids = [10, 2, 9] %%time graph_by_network_ids = m.get_graph_by_ids(network_ids) print_summary(graph_by_network_ids)
Database Comparison - Blobs versus Edge Store.ipynb
pybel/pybel-notebooks
apache-2.0
Using SQL
def get_graph_by_networks_edges(manager, network_ids, **kwargs): edges = manager.session.query(Edge).join(network_edge).filter(network_edge.c.network_id.in_(network_ids)) graph = BELGraph(**kwargs) for edge in edges: edge.insert_into_graph(graph) return graph %%time graph_by_networks_edges = get_graph_by_networks_edges(m, network_ids) print_summary(graph_by_networks_edges)
Database Comparison - Blobs versus Edge Store.ipynb
pybel/pybel-notebooks
apache-2.0
Getting Edges Matching an Annotation
from sqlalchemy import and_ def get_graph_by_annotation(manager, network_ids, annotation_id, **kwargs): edges = manager.session.query(Edge).\ join(network_edge).join(edge_annotation).join(AnnotationEntry).\ filter(and_(network_edge.c.network_id.in_(network_ids), edge_annotation.c.annotation_id == annotation_id)) graph = BELGraph(**kwargs) for edge in edges: edge.insert_into_graph(graph) return graph
Database Comparison - Blobs versus Edge Store.ipynb
pybel/pybel-notebooks
apache-2.0
Discovering Driver Nodes / Control Nodes
display(HTML('<h3>Control State Transition Graph (CSTG)</h3>')) # THIS MIGHT TAKE A LONG TIME, it is here for demo purposes. driver_nodes = N.attractor_driver_nodes(min_dvs=1, max_dvs=6, verbose=True) print(N.get_node_name(driver_nodes)) #> ['AP2', 'EMF1', 'LFY', 'TFL1', 'UFO', 'WUS'], ['AG', 'EMF1', 'LFY', 'TFL1', 'UFO', 'WUS'] display(HTML('<h3>Structural Controlability (SC)</h3>')) SC = N.structural_controllability_driver_nodes(keep_self_loops=False) print(N.get_node_name(SC)) display(HTML('<h3>Minimum Dominating Set (MDS)</h3>')) MDS = N.minimum_dominating_set_driver_nodes(max_search=10) print(N.get_node_name(MDS)) display(HTML('<h3>Feedback Vertex Control (FVS)</h3>')) FVS_g = N.feedback_vertex_set_driver_nodes(graph='structural', method='grasp', remove_constants=True) print(N.get_node_name(FVS_g) , '(grasp)') FVS_bf = N.feedback_vertex_set_driver_nodes(graph='structural', method='bruteforce', remove_constants=True) print(N.get_node_name(FVS_bf) , '(bruteforce)') #display(HTML('<h3>Pinning Control (PC)</h3>')) #under development
tutorials/Control - Thaliana.ipynb
rionbr/CANA
mit
Differences between the Control Methods
# Control via State Transition Graph (CSTG) CSTGs = [['AP2', 'EMF1', 'LFY', 'TFL1', 'UFO', 'WUS'], ['AG', 'EMF1', 'LFY', 'TFL1', 'UFO', 'WUS'] ] # Pinning Control PCs = [['AP3', 'UFO', 'AP1', 'LFY', 'WUS', 'AG'], ['AP3', 'UFO', 'EMF1', 'WUS', 'AG', 'TFL1'], ['AP3', 'UFO', 'LFY', 'WUS', 'AG', 'TFL1'] ] # Feedback Vertex Control # (threshold,loops,[control sets]) FVS_Objs = [ ('original',49, [['AP3','UFO','LFY','WUS','AG','TFL1','PI']]), ('0',19, [['AP3','UFO','AP1','LFY','WUS','AG','PI'],['AP3','UFO','EMF1','WUS','AG','TFL1','PI'],['AP3','UFO','LFY','WUS','AG','TFL1','PI']]), ('0.0078125',17,[['AP3', 'UFO', 'AP1', 'LFY', 'WUS', 'PI']]), ('0.0234375',14,[['AP3', 'UFO', 'AP1', 'LFY', 'WUS', 'PI'],['AP3', 'UFO', 'EMF1', 'WUS', 'TFL1', 'PI'],['AP3', 'UFO', 'LFY', 'WUS', 'TFL1', 'PI']]), ('0.03125',14,[['AP3', 'UFO', 'AP1', 'LFY', 'WUS', 'PI'],['AP3', 'UFO', 'EMF1', 'WUS', 'TFL1', 'PI'],['AP3', 'UFO', 'LFY', 'WUS', 'TFL1', 'PI']]), ('0.046875',14,[['AP3', 'UFO', 'AP1', 'LFY', 'WUS', 'PI'],['AP3', 'UFO', 'EMF1', 'WUS', 'TFL1', 'PI'],['AP3', 'UFO', 'LFY', 'WUS', 'TFL1', 'PI']]), ('0.09375',13,[['AP3', 'UFO', 'AP1', 'LFY', 'WUS'],['AP3', 'UFO', 'EMF1', 'WUS', 'TFL1'],['AP3', 'UFO', 'LFY', 'WUS', 'TFL1']]), ('0.125',10,[['AP3', 'UFO', 'AP1', 'LFY', 'WUS'],['AP3', 'UFO', 'EMF1', 'WUS', 'TFL1'],['AP3', 'UFO', 'LFY', 'WUS', 'AG'],['AP3', 'UFO', 'LFY', 'WUS', 'TFL1']]), ('0.140625',8,[['UFO', 'AP1', 'LFY', 'WUS'],['UFO', 'EMF1', 'WUS', 'TFL1'],['UFO', 'LFY', 'WUS', 'AG'],['UFO', 'LFY', 'WUS', 'TFL1']]), ('0.25',3,[['UFO', 'EMF1', 'WUS', 'TFL1'],['UFO', 'LFY', 'WUS', 'TFL1']]), ('0.2734375',3,[['UFO', 'EMF1', 'WUS', 'TFL1'],['UFO', 'LFY', 'WUS', 'TFL1']]), ('0.28125',3,[['UFO', 'EMF1', 'WUS', 'TFL1'],['UFO', 'LFY', 'WUS', 'TFL1']]), ('0.34375',3,[['UFO', 'EMF1', 'WUS', 'TFL1'],['UFO', 'LFY', 'WUS', 'TFL1']]), ('0.453125',3,[['AP3', 'UFO', 'EMF1', 'WUS', 'TFL1'],['AP3', 'UFO', 'LFY', 'WUS', 'TFL1']]), ('0.5',2,[['AP3', 'UFO', 'FUL', 'LFY', 'WUS', 'TFL1']]), ('0.65625',2,[['AP3', 
'UFO', 'FUL', 'LFY', 'WUS', 'TFL1', 'PI']]), ('0.7265625',2,[['AP3', 'UFO', 'FUL', 'LFY', 'WUS', 'AG', 'TFL1', 'PI']]), ('0.75',1,[['AP3', 'UFO', 'FUL', 'LFY', 'WUS', 'AG', 'TFL1', 'PI']]), ('0.875',1,[['AP3', 'UFO', 'FUL', 'AP1', 'LFY', 'WUS', 'AG', 'TFL1', 'PI']]) ] # Sort sets by alphabetical order CSTGs = [sorted(x) for x in CSTGs] PCs = [sorted(x) for x in PCs] FVS_Objs = [(name,loops,[sorted(x) for x in sets]) for (name,loops,sets) in FVS_Objs] def jaccard(u,v): return len(u.intersection(v)) / len(u.union(v)) display(HTML("<h2>PC vs FVS</h2>")) for FVS_O in FVS_Objs: print('T: %s' % (FVS_O[0])) for PC, FVS in product(PCs,FVS_O[2]): FVSset, PCset = set(FVS), set(PC) FVSstr, PCstr = ','.join(FVS), ','.join(PC) inclusion = PCset.issubset(FVSset) print('PC in FVS=%i ; J=%.3f [%s <-> %s]' % (inclusion, jaccard(PCset,FVSset) , PCstr , FVSstr)) print() display(HTML("<h2>CSTG vs FVS</h2>")) for FVS_O in FVS_Objs: print('T: %s' % (FVS_O[0])) for CSTG, FVS in product(CSTGs,FVS_O[2]): FVSset, CSTGset = set(FVS), set(CSTG) FVSstr, CSTGstr = ','.join(FVS), ','.join(CSTG) inclusion = CSTGset.issubset(FVSset) print('CSTG in FVS=%i ; J=%.3f [%s <-> %s]' % (inclusion, jaccard(CSTGset,FVSset) , CSTGstr , FVSstr)) print()
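The `jaccard` helper used above can be sanity-checked in isolation (the gene sets below are taken from the control-set lists above):

```python
def jaccard(u, v):
    # Size of the intersection over size of the union
    return len(u & v) / len(u | v)

# 3 shared genes out of 5 distinct genes -> 0.6
j = jaccard({'UFO', 'LFY', 'WUS', 'TFL1'}, {'UFO', 'EMF1', 'WUS', 'TFL1'})
```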
tutorials/Control - Thaliana.ipynb
rionbr/CANA
mit
Upsampling Minority Class To Match Majority
# Indices of each class' observations i_class0 = np.where(y == 0)[0] i_class1 = np.where(y == 1)[0] # Number of observations in each class n_class0 = len(i_class0) n_class1 = len(i_class1) # For every observation in class 1, randomly sample from class 0 with replacement i_class0_upsampled = np.random.choice(i_class0, size=n_class1, replace=True) # Join together class 0's upsampled target vector with class 1's target vector np.concatenate((y[i_class0_upsampled], y[i_class1]))
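A self-contained version of the same recipe with a synthetic imbalanced target (the 10/90 split below is made up for illustration):

```python
import numpy as np

np.random.seed(0)
# Toy imbalanced target: 10 observations of class 0, 90 of class 1
y_toy = np.array([0] * 10 + [1] * 90)

i0 = np.where(y_toy == 0)[0]
i1 = np.where(y_toy == 1)[0]

# Resample class 0 indices with replacement up to class 1's size
i0_up = np.random.choice(i0, size=len(i1), replace=True)

# The joined target vector now has 90 observations of each class
y_balanced = np.concatenate((y_toy[i0_up], y_toy[i1]))
```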
machine-learning/handling_imbalanced_classes_with_upsampling.ipynb
tpin3694/tpin3694.github.io
mit
Exponential, geometric, and polynomial growth Exercise: Suppose there are two banks across the street from each other, The First Geometric Bank (FGB) and Exponential Savings and Loan (ESL). They offer the same interest rate on checking accounts, 3%, but at FGB, they compute and pay interest at the end of each year, and at ESL they compound interest continuously. If you deposit $p_0$ dollars at FGB at the beginning of Year 0, the balance of your account at the end of Year $n$ is $ x_n = p_0 (1 + \alpha)^n $ where $\alpha = 0.03$. At ESL, your balance at any time $t$ would be $ x(t) = p_0 \exp(\alpha t) $ If you deposit \$1000 at each bank at the beginning of Year 0, how much would you have in each account after 10 years? Is there an interest rate FGB could pay so that your balance at the end of each year would be the same at both banks? What is it? Hint: modsim provides a function called exp, which is a wrapper for the NumPy function exp.
# Solution p_0 = 1000 alpha = 0.03 # Solution ts = linrange(11) # Solution geometric = p_0 * (1 + alpha) ** ts # Solution exponential = p_0 * exp(alpha * ts) # Solution alpha2 = exp(alpha) - 1 # Solution geometric = p_0 * (1 + alpha2) ** ts # Solution plot(ts, exponential, '-', label='Exponential') plot(ts, geometric, 's', label='Geometric') decorate(xlabel='Time (years)', ylabel='Value (dollars)')
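The equivalence behind `alpha2` can be checked numerically without modsim: annual compounding at the rate `e^alpha - 1` reproduces continuous compounding at `alpha` exactly at every year boundary, since `(1 + (e^alpha - 1))^n = e^(alpha n)`.

```python
import numpy as np

p_0, alpha = 1000.0, 0.03
ts = np.arange(11)

# The equivalent annual rate for continuous compounding at alpha
alpha2 = np.exp(alpha) - 1
geometric = p_0 * (1 + alpha2) ** ts
exponential = p_0 * np.exp(alpha * ts)
```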
soln/interest.ipynb
AllenDowney/ModSimPy
mit
Exercise: Suppose a new bank opens called the Polynomial Credit Union (PCU). In order to compete with First Geometric Bank and Exponential Savings and Loan, PCU offers a parabolic savings account where the balance is a polynomial function of time: $ x(t) = p_0 + \beta_1 t + \beta_2 t^2 $ As a special deal, they offer an account with $\beta_1 = 30$ and $\beta_2 = 0.5$, with those parameters guaranteed for life. Suppose you deposit \$1000 at all three banks at the beginning of Year 0. How much would you have in each account at the end of Year 10? How about Year 20? And Year 100?
# Solution number_of_years = 100 ts = linrange(number_of_years+1) geometric = p_0 * (1 + alpha2) ** ts exponential = p_0 * exp(alpha * ts) None # Solution beta1 = 30 beta2 = 0.5 parabolic = p_0 + beta1 * ts + beta2 * ts**2 None # Solution def plot_results(): plot(ts, exponential, '-', label='Exponential') plot(ts, geometric, 's', label='Geometric') plot(ts, parabolic, 'o', label='Parabolic') decorate(xlabel='Time (years)', ylabel='Value (dollars)') # Solution plot_results() # Solution plot_results() plt.yscale('log')
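The point of the exercise can also be checked numerically: whatever the polynomial coefficients, an exponential with a positive rate eventually overtakes it. A standalone sketch using the same parameters as above:

```python
import numpy as np

p_0, alpha = 1000.0, 0.03
beta1, beta2 = 30.0, 0.5

ts = np.arange(301)
exponential = p_0 * np.exp(alpha * ts)
parabolic = p_0 + beta1 * ts + beta2 * ts ** 2

# Both accounts start equal at Year 0; by Year 300 the exponential has won
```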
soln/interest.ipynb
AllenDowney/ModSimPy
mit
This will populate the structcol namespace with a few functions and classes. You will probably find it easiest to keep all your calculations within a Jupyter notebook like this one. The package itself contains only generic functions and classes (that is, it doesn't include any specific calculations of structural color spectra beyond the ones in this notebook). For calculations in a notebook, you'll want to import some other packages too, like numpy and matplotlib:
%matplotlib inline import matplotlib import numpy as np import matplotlib.pyplot as plt import scipy.integrate # require seaborn (not installed by default in Anaconda; comment out if not installed) import seaborn as sns
tutorial.ipynb
manoharan-lab/structural-color
gpl-3.0
Using quantities with units The structural-color package uses the pint package to keep track of units and automatically convert them. To define a quantity with units, use the structcol.Quantity constructor. For example, to define a wavelength of 0.45 $\mu$m:
wavelen = sc.Quantity('0.45 um') print(wavelen) print(wavelen.dimensionality)
tutorial.ipynb
manoharan-lab/structural-color
gpl-3.0
Converting between units:
print(wavelen.to('m'))
tutorial.ipynb
manoharan-lab/structural-color
gpl-3.0
Units work in numpy arrays, too:
wavelens = sc.Quantity(np.arange(450.0, 800.0, 10.0), 'nm') print(wavelens.to('um'))
tutorial.ipynb
manoharan-lab/structural-color
gpl-3.0
Refractive index module To use the refractive index module:
import structcol.refractive_index as ri
tutorial.ipynb
manoharan-lab/structural-color
gpl-3.0
This module contains dispersion relations for a number of materials. For example, to get the index of polystyrene at 500 nm, you can call
ri.n('polystyrene', sc.Quantity('500 nm'))
tutorial.ipynb
manoharan-lab/structural-color
gpl-3.0
You must give this function a quantity with units as the second argument. If you give it a number, it will throw an error, rather than trying to guess what units you're thinking of. You can also calculate the refractive index at several different wavelengths simultaneously, like this (using wavelens array from above):
n_particle = ri.n('polystyrene', wavelens) plt.plot(wavelens, n_particle) plt.ylabel('$n_\mathrm{PS}$') plt.xlabel('wavelength (nm)')
tutorial.ipynb
manoharan-lab/structural-color
gpl-3.0
You can use complex refractive indices by adding the imaginary component of the index. Note that in python the imaginary number $i$ is denoted by $j$. You can choose to use the values from literature or from experimental measurement:
ri.n('polystyrene', sc.Quantity('500 nm'))+0.0001j
tutorial.ipynb
manoharan-lab/structural-color
gpl-3.0
Importing your own refractive index data You can input your own refractive index data (which can be real or complex) by calling the material $\textbf{'data'}$ and specifying the optional parameters $\textbf{'index_data'}$, $\textbf{'wavelength_data'}$, and $\textbf{'kind'}$. index_data: refractive index data from literature or experiment that the user can input if desired. The data is interpolated, so that the user can call specific values of the index. The index data can be real or complex. wavelength_data: wavelength data corresponding to index_data. Must be specified as a Quantity. kind: type of interpolation. The options are: ‘linear’, ‘nearest’, ‘zero’, ‘slinear’, ‘quadratic’, ‘cubic’, ‘previous’, ‘next', where ‘zero’, ‘slinear’, ‘quadratic’ and ‘cubic’ refer to a spline interpolation of zeroth, first, second or third order; ‘previous’ and ‘next’ simply return the previous or next value of the point. The default is 'linear'.
wavelength_values = sc.Quantity(np.array([400,500,600]), 'nm') index_values= sc.Quantity(np.array([1.5,1.55,1.6]), '') wavelength = sc.Quantity(np.arange(400, 600, 1), 'nm') n_data = ri.n('data', wavelength, index_data=index_values, wavelength_data=wavelength_values) plt.plot(wavelength, n_data, '--', label='fit') plt.plot(wavelength_values, index_values, '.', markersize=18, label='data') plt.ylabel('$n_\mathrm{data}$') plt.xlabel('wavelength (nm)') plt.legend();
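Under the hood, the `'data'` material interpolates between the supplied points; the same linear interpolation can be sketched with plain NumPy (an illustration of the idea, not structcol's actual implementation):

```python
import numpy as np

wl_data = np.array([400.0, 500.0, 600.0])  # wavelengths in nm
n_data = np.array([1.5, 1.55, 1.6])        # tabulated refractive indices

# Linear interpolation between tabulated points, as with kind='linear'
n_450 = np.interp(450.0, wl_data, n_data)  # halfway between 1.5 and 1.55
```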
tutorial.ipynb
manoharan-lab/structural-color
gpl-3.0
Calculating a reflection spectrum With the tools above we can calculate a reflection spectrum using the single-scattering model described in Magkiriadou, S., Park, J.-G., Kim, Y.-S., and Manoharan, V. N. “Absence of Red Structural Color in Photonic Glasses, Bird Feathers, and Certain Beetles” Physical Review E 90, no. 6 (2014): 62302. doi:10.1103/PhysRevE.90.062302 The effective refractive index of the sample can be calculated either with the Maxwell-Garnett formulation or the Bruggeman equation. The Bruggeman equation is the default option for our calculations, because Maxwell-Garnett is not valid when the volume fractions of the components are comparable, which is often the case in structural color samples (Markel, V. A., "Introduction to the Maxwell Garnett approximation: tutorial", Journal of the Optical Society of America A, 33, no. 7 (2016)). In addition, Maxwell Garnett only works for systems of two components (e.g. a particle index and a matrix index), whereas Bruggeman can be applied to multicomponent systems such as core-shell particles. The model can also handle absorbing systems, either with an absorbing particle or an absorbing matrix. Then the corresponding complex refractive indices must be specified.
# uncomment the line below to time how long this calculation takes # %%timeit from structcol import model # parameters for our colloidal sample volume_fraction = sc.Quantity(0.64, '') radius = sc.Quantity('125 nm') # wavelengths of interest wavelength = sc.Quantity(np.arange(400., 800., 10.0), 'nm') # calculate refractive indices at wavelengths of interest n_particle = sc.Quantity(1.53, '')#ri.n('polystyrene', wavelength) n_matrix = ri.n('vacuum', wavelength) n_medium = n_matrix # now calculate the reflection spectrum, asymmetry parameter (g), and # transport length (lstar) refl = np.zeros(wavelength.size) g = np.zeros(wavelength.size) # note the units explicitly assigned to the transport length; you # must specify a length unit here lstar = np.zeros(wavelength.size)*sc.ureg('um') for i in range(wavelength.size): # the first element in the tuple is the reflection coefficient for # unpolarized light. The next two (which we skip) are the # coefficients for parallel and perpendicularly polarized light. # Third is the asymmetry parameter, and fourth the transport length refl[i], _, _, g[i], lstar[i] = model.reflection(n_particle, n_matrix[i], n_medium[i], wavelength[i], radius, volume_fraction, thickness = sc.Quantity('4000.0 nm'), theta_min = sc.Quantity('90 deg'), maxwell_garnett=False) # the default option is False fig, (ax_a, ax_b, ax_c) = plt.subplots(nrows=3, figsize=(8,8)) ax_a.plot(wavelength, refl) ax_a.set_ylabel('Reflected fraction (unpolarized)') ax_b.plot(wavelength, g) ax_b.set_ylabel('Asymmetry parameter') ax_c.semilogy(wavelength, lstar) ax_c.set_ylabel('Transport length (μm)') ax_c.set_xlabel('wavelength (nm)')
tutorial.ipynb
manoharan-lab/structural-color
gpl-3.0