Validation Accuracy: 95.92%

Congratulations! You've built a feedforward neural network in Keras! Don't stop here! Next, you'll add a convolutional layer to the network.

Convolutions

Build a new network, similar to your existing network. Before the hidden layer, add a 3x3 convolutional layer with 32 filters and valid padding. Then compile and train the network.

Hint 1: The Keras example of a convolutional neural network for MNIST would be a good example to review.

Hint 2: Now that the first layer of the network is a convolutional layer, you no longer need to reshape the input images before passing them to the network. You might need to reload your training data to recover the original shape.

Hint 3: Add a Flatten() layer between the convolutional layer and the fully-connected hidden layer.
from keras.layers import Convolution2D, MaxPooling2D, Dropout, Flatten

# TODO: Re-construct the network and add a convolutional layer before the first fully-connected layer.
model = Sequential()
model.add(Convolution2D(16, 5, 5, border_mode='same', input_shape=(32, 32, 3)))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(128, input_shape=(flat_img_size,), name='hidden1'))
model.add(Activation('relu'))
model.add(Dense(43, name='output'))
model.add(Activation('softmax'))

# TODO: Compile and train the model.
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
history = model.fit(X_train, Y_train, batch_size=128, nb_epoch=10, validation_data=(X_val, Y_val), verbose=1)

# STOP: Do not change the tests below. Your implementation should pass these tests.
assert(history.history['val_acc'][0] > 0.9), "The validation accuracy is: %.3f" % history.history['val_acc'][0]
Exercises/Term1/keras-lab/traffic-sign-classification-with-keras.ipynb
thomasantony/CarND-Projects
mit
Validation Accuracy: 96.98%

Pooling

Re-construct your network and add a 2x2 pooling layer immediately following your convolutional layer. Then compile and train the network.
# TODO: Re-construct the network and add a pooling layer after the convolutional layer.
model = Sequential()
model.add(Convolution2D(16, 5, 5, border_mode='same', input_shape=(32, 32, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(128, input_shape=(flat_img_size,), name='hidden1'))
model.add(Activation('relu'))
model.add(Dense(43, name='output'))
model.add(Activation('softmax'))

# TODO: Compile and train the model.
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
history = model.fit(X_train, Y_train, batch_size=128, nb_epoch=10, validation_data=(X_val, Y_val), verbose=1)

# STOP: Do not change the tests below. Your implementation should pass these tests.
## Fixed bug: check the accuracy of the last epoch and report that same value in the message.
assert(history.history['val_acc'][-1] > 0.9), "The validation accuracy is: %.3f" % history.history['val_acc'][-1]
Exercises/Term1/keras-lab/traffic-sign-classification-with-keras.ipynb
thomasantony/CarND-Projects
mit
Validation Accuracy: 97.36%

Dropout

Re-construct your network and add dropout after the pooling layer. Set the dropout rate to 50%.
# TODO: Re-construct the network and add dropout after the pooling layer.
model = Sequential()
model.add(Convolution2D(16, 5, 5, border_mode='same', input_shape=(32, 32, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(128, input_shape=(flat_img_size,), name='hidden1'))
model.add(Activation('relu'))
model.add(Dense(43, name='output'))
model.add(Activation('softmax'))

# TODO: Compile and train the model.
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
history = model.fit(X_train, Y_train, batch_size=128, nb_epoch=10, validation_data=(X_val, Y_val), verbose=1)

# STOP: Do not change the tests below. Your implementation should pass these tests.
assert(history.history['val_acc'][-1] > 0.9), "The validation accuracy is: %.3f" % history.history['val_acc'][-1]
Exercises/Term1/keras-lab/traffic-sign-classification-with-keras.ipynb
thomasantony/CarND-Projects
mit
Validation Accuracy: 97.75%

Optimization

Congratulations! You've built a neural network with convolutions, pooling, dropout, and fully-connected layers, all in just a few lines of code. Have fun with the model and see how well you can do! Add more layers, or regularization, or different padding, or batches, or more training epochs. What is the best validation accuracy you can achieve?
pool_size = (2, 2)

model = Sequential()
model.add(Convolution2D(16, 5, 5, border_mode='same', input_shape=(32, 32, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=pool_size))
model.add(Convolution2D(64, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=pool_size))
model.add(Dropout(0.5))
model.add(Convolution2D(128, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=pool_size))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(256, input_shape=(flat_img_size,), name='hidden1'))
model.add(Activation('relu'))
model.add(Dense(43, name='output'))
model.add(Activation('softmax'))

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
history = model.fit(X_train, Y_train, batch_size=128, nb_epoch=50, validation_data=(X_val, Y_val), verbose=1)
Exercises/Term1/keras-lab/traffic-sign-classification-with-keras.ipynb
thomasantony/CarND-Projects
mit
Best Validation Accuracy: 99.65%

Testing

Once you've picked out your best model, it's time to test it. Load up the test data and use the evaluate() method to see how well it does.

Hint 1: After you load your test data, don't forget to normalize the input and one-hot encode the output, so it matches the training data.

Hint 2: The evaluate() method should return an array of numbers. Use the metrics_names attribute to get the labels.
# with open('./test.p', mode='rb') as f:
#     test = pickle.load(f)
# X_test = test['features']
# y_test = test['labels']
# X_test = X_test.astype('float32')
# X_test /= 255
# X_test -= 0.5
# Y_test = np_utils.to_categorical(y_test, 43)

model.evaluate(X_test, Y_test)
model.save('test-acc-9716-epoch50.h5')

from keras.models import load_model
model2 = load_model('test-acc-9716-epoch50.h5')
# model2.evaluate(X_test, Y_test)
model2.summary()
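To get the labels mentioned in Hint 2, the scores returned by evaluate() can be zipped with metrics_names; this is my own sketch, not part of the original notebook:

# Label each value returned by evaluate() with its metric name.
scores = model.evaluate(X_test, Y_test)
for name, value in zip(model.metrics_names, scores):
    print('{}: {:.4f}'.format(name, value))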
Exercises/Term1/keras-lab/traffic-sign-classification-with-keras.ipynb
thomasantony/CarND-Projects
mit
The Python standard function dir(obj) gets all member names of an object. Let's see what is in the FORTH kernel vm:
print(dir(vm))
notebooks/tutor.ipynb
hcchengithub/project-k
mit
I only want you to see that there are very few properties and methods in this FORTH kernel object, and many of them are conventional FORTH tokens like code, endcode, comma, compiling, dictionary, here, last, stack, pop, push, tos, rpop, rstack, rtos, tib, ntib, tick, and words. Now let's play. The property vm.stack is the FORTH data stack, which is empty at first.
vm.stack
notebooks/tutor.ipynb
hcchengithub/project-k
mit
The vm.dictate() method is the way the project-k VM receives your commands (a string). It is also the way we feed it an entire FORTH source code file. Everything given to vm.dictate() is like a command line you type into the FORTH system, even something as simple as a single number:
vm.dictate("123") vm.stack
notebooks/tutor.ipynb
hcchengithub/project-k
mit
The first line above tells the project-k VM to push 123 onto the data stack, and the second line views the data stack. We can even cascade these two lines into one:
vm.dictate("456").stack
notebooks/tutor.ipynb
hcchengithub/project-k
mit
because vm.dictate() returns the vm object itself. The project-k VM knows only two words, 'code' and 'end-code', at first. Let's define a FORTH command (or 'word') that prints "Hello World!!":
vm.dictate("code hi! print('Hello World!!') end-code"); # define the "hi!" comamnd where print() is a standard python function vm.dictate("hi!");
notebooks/tutor.ipynb
hcchengithub/project-k
mit
Do you see what we have done? We defined a new FORTH code word! By the way, we can use any character in a word name except white space. This is a FORTH convention.

Define the 'words' command to view all words

I'd like to see all the words we have so far. The FORTH command 'words' is what we want now, but this tiny FORTH system does not have it yet. We have to define it:
vm.dictate("code words print([w.name for w in vm.words['forth'][1:]]) end-code") vm.dictate("words");
notebooks/tutor.ipynb
hcchengithub/project-k
mit
In the above definition, vm.words is a Python dictionary (not the FORTH dictionary) defined as a property of the project-k VM object. It is something like an array of all current words in the current vocabulary named forth, which is the only vocabulary that comes with the FORTH kernel. A FORTH 'vocabulary' is simply a key in a Python dictionary key:value pair. We have only 4 words so far, as the new words command shows above: 'code' and 'end-code' are built into the FORTH kernel; 'hi!' and 'words' were defined above.

Define '+' and the conventional FORTH words '.s' and 's"'

The next exercise is to define some more FORTH words.
vm.dictate("code + push(pop(1)+pop()) end-code"); # pop two operands from FORTH data stack and push back the result vm.dictate("code .s print(stack) end-code"); # print the FORTH data stack vm.dictate('code s" push(nexttoken(\'"\'));nexttoken() end-code'); # get a string vm.dictate('words'); # list all recent words
notebooks/tutor.ipynb
hcchengithub/project-k
mit
This example demonstrates how to use the built-in methods push(), pop(), nexttoken() and the stack property (or global variable). As shown in the above definitions, we can omit vm., so vm.push and vm.stack are simplified to push and stack, because code ... end-code definitions run right in the VM namespace. Now let's try these new words:
vm.stack = []  # clear the data stack
vm.dictate(' s" Forth "')  # get the string 'Forth '
vm.dictate(' s" is the easiest "')  # get the string 'is the easiest '
vm.dictate(' s" programming language."')  # get the string 'programming language.'
vm.dictate('.s');  # view the data stack
print(vm.dictate('+').stack)  # concatenate the top two strings
print(vm.dictate('+').stack)  # concatenate the rest
notebooks/tutor.ipynb
hcchengithub/project-k
mit
The + command can certainly concatenate strings and can also add numbers, because Python's + operator works that way. Please try it with integers and floating point numbers:
print(vm.dictate('123 456 + ').pop());  # push 123, push 456, add them
print(vm.dictate('1.23 45.6 + ').pop());
notebooks/tutor.ipynb
hcchengithub/project-k
mit
Build the network

For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.

Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pick up.

However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.

Let's begin!

Input

The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions:
* Implement neural_net_image_input
  * Return a TF Placeholder
  * Set the shape using image_shape with batch size set to None.
  * Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
  * Return a TF Placeholder
  * Set the shape using n_classes with batch size set to None.
  * Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
  * Return a TF Placeholder for dropout keep probability.
  * Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.

These names will be used at the end of the project to load your saved model.

Note: None for shapes in TensorFlow allows for a dynamic size.
import tensorflow as tf

def neural_net_image_input(image_shape):
    """
    Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
    # TODO: Implement Function
    shape = (None, image_shape[0], image_shape[1], image_shape[2])
    return tf.placeholder(tf.float32, shape, name='x')


def neural_net_label_input(n_classes):
    """
    Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
    # TODO: Implement Function
    shape = (None, n_classes)
    return tf.placeholder(tf.float32, shape, name='y')


def neural_net_keep_prob_input():
    """
    Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
    # TODO: Implement Function
    return tf.placeholder(tf.float32, name='keep_prob')


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
image-classification/dlnd_image_classification.ipynb
javoweb/deep-learning
mit
Convolution and Max Pooling Layer

Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
  * We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
  * We recommend you use same padding, but you're welcome to use any padding.

Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """
    Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """
    # TODO: Implement Function
    # Setting parameters
    w_shape = (conv_ksize[0], conv_ksize[1], x_tensor.get_shape()[3].value, conv_num_outputs)
    conv_stride = (1, conv_strides[0], conv_strides[1], 1)
    p_ksize = (1, pool_ksize[0], pool_ksize[1], 1)
    p_stride = (1, pool_strides[0], pool_strides[1], 1)
    # Convolution layer ('VALID' padding is used here, although 'SAME' is the recommended default)
    weights = tf.Variable(tf.truncated_normal(w_shape, stddev=0.1))
    bias = tf.Variable(tf.zeros(conv_num_outputs))
    conv2d = tf.nn.conv2d(x_tensor, weights, conv_stride, padding='VALID')
    conv2d = tf.nn.bias_add(conv2d, bias)
    conv2d = tf.nn.relu(conv2d)
    return tf.nn.max_pool(conv2d, p_ksize, p_stride, padding='VALID')


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
image-classification/dlnd_image_classification.ipynb
javoweb/deep-learning
mit
Flatten Layer

Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size).

Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
def flatten(x_tensor):
    """
    Flatten x_tensor to (Batch Size, Flattened Image Size)
    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
    : return: A tensor of size (Batch Size, Flattened Image Size).
    """
    # TODO: Implement Function
    f_size = x_tensor.get_shape()[1].value * x_tensor.get_shape()[2].value * x_tensor.get_shape()[3].value
    return tf.reshape(x_tensor, shape=[-1, f_size])


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
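For reference, the shortcut option mentioned above is a one-liner; this sketch is my own addition, not part of the original notebook (tf.contrib.layers.flatten infers the flattened size itself):

# Shortcut variant using the TF Layers (contrib) package (the function name is my own):
def flatten_shortcut(x_tensor):
    return tf.contrib.layers.flatten(x_tensor)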
image-classification/dlnd_image_classification.ipynb
javoweb/deep-learning
mit
Fully-Connected Layer

Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs).

Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
def fully_conn(x_tensor, num_outputs):
    """
    Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of output that the new tensor should be.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    # TODO: Implement Function
    n_inputs = x_tensor.get_shape()[1].value
    weights = tf.Variable(tf.truncated_normal((n_inputs, num_outputs), stddev=0.1))
    bias = tf.Variable(tf.zeros(num_outputs))
    f_nn = tf.add(tf.matmul(x_tensor, weights), bias)
    return tf.nn.relu(f_nn)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)
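Again, a hedged sketch of the shortcut option (my addition, not the author's solution): tf.layers.dense creates the weights and bias internally, and passing activation=None instead of tf.nn.relu would give the output-layer variant of the next section.

# Shortcut variant using the TF Layers package (the function name is my own):
def fully_conn_shortcut(x_tensor, num_outputs):
    return tf.layers.dense(x_tensor, num_outputs, activation=tf.nn.relu)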
image-classification/dlnd_image_classification.ipynb
javoweb/deep-learning
mit
Output Layer

Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs).

Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.

Note: Activation, softmax, or cross entropy should not be applied to this.
def output(x_tensor, num_outputs):
    """
    Apply an output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of output that the new tensor should be.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    # TODO: Implement Function
    n_inputs = x_tensor.get_shape()[1].value
    weights = tf.Variable(tf.truncated_normal((n_inputs, num_outputs), stddev=0.1))
    bias = tf.Variable(tf.zeros(num_outputs))
    return tf.add(tf.matmul(x_tensor, weights), bias)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
image-classification/dlnd_image_classification.ipynb
javoweb/deep-learning
mit
1.4 Visualize Features
data.describe()
data.describe(include=['object'])

ind = data['rating_diff'].argmax()
print data.iloc[ind].movie_title
print data.iloc[ind].scaled_imdb
print data.iloc[ind].scaled_douban
print data.iloc[ind].title_year
print data.iloc[ind].movie_imdb_link
print data.iloc[ind].d_year
print data.iloc[ind].douban_score
print data.iloc[ind].imdb_score

data.columns

# 2. Predict differences in ratings
res_dat['diff_rating'] = res_dat['douban_score'] - res_dat['imdb_score']

# 2.1. Convert the categorical variable Genre to dummy variables.
# Only extract the first genre out of the list to simplify the problem.
res_dat['genre1'] = res_dat.apply(lambda row: (row['genres'].split('|'))[0], axis=1)
# res_dat['genre1'].value_counts()

# Because there are 21 genres, here we only choose the top 7 to convert to dummy variables.
top_genre = ['Comedy', 'Action', 'Drama', 'Adventure', 'Crime', 'Biography', 'Horror']
# The rest of the genre types we just consider as others.
res_dat['top_genre'] = res_dat.apply(lambda row: row['genre1'] if row['genre1'] in top_genre else 'Other', axis=1)

# Select num_user_for_reviews, director_facebook_likes, actor_1_facebook_likes, gross,
# genres, budget and dnum_review for EDA.
res_subdat = res_dat[['top_genre', 'num_user_for_reviews', 'director_facebook_likes', 'actor_1_facebook_likes', 'gross', 'budget', 'dnum_review', 'diff_rating']]
res_subdat = pd.get_dummies(res_subdat, prefix=['top_genre'])
# res_dat = pd.get_dummies(res_dat, prefix=['top_genre'])
res_subdat.shape

# Create a subset for visualization and preliminary analysis.
col2 = [u'num_user_for_reviews', u'director_facebook_likes', u'actor_1_facebook_likes', u'gross', u'budget', u'dnum_review', u'top_genre_Action', u'top_genre_Adventure', u'top_genre_Biography', u'top_genre_Comedy', u'top_genre_Crime', u'top_genre_Drama', u'top_genre_Horror', u'top_genre_Other', u'diff_rating']
res_subdat = res_subdat[col2]

# A subset for plotting correlation.
col_cat = [u'gross', u'budget', u'dnum_review', u'num_user_for_reviews', u'top_genre_Action', u'top_genre_Adventure', u'top_genre_Biography', u'top_genre_Comedy', u'top_genre_Crime', u'top_genre_Drama', u'top_genre_Horror', u'diff_rating']
res_subdat_genre = res_subdat[col_cat]

# Show pair-wise correlation between differences in ratings and estimators.
import matplotlib.pylab as plt
import numpy as np
import seaborn as sns
corr = res_subdat_genre.corr()
sns.set(style="white")
f, ax = plt.subplots(figsize=(11, 9))
cmap = sns.diverging_palette(220, 10, as_cmap=True)
mask = np.zeros_like(corr, dtype=np.bool)
sns.heatmap(corr, mask=mask, cmap=cmap, vmax=.3, square=True, linewidths=.5, cbar_kws={"shrink": .5}, ax=ax)

# Prepare training set and target set.
col_train = col2[:len(col2) - 1]
col_target = col2[len(col2) - 1]
# cl_res_subdat = res_subdat.dropna(axis=0)
# cl_res_subdat.shape

# 2.2 Use a Random Forest Regressor for prediction.
X_cat = res_subdat.ix[:, 'top_genre_Action':'top_genre_Other']
num_col = []
for i in res_dat.columns:
    if res_dat[i].dtype != 'object':
        num_col.append(i)
X_num = res_dat[num_col]
X = pd.concat([X_cat, X_num], axis=1)
X = X.dropna(axis=0)
y = X['diff_rating']
X = X.iloc[:, :-1]
X.drop(['imdb_score', 'douban_score'], axis=1, inplace=True)

from sklearn.model_selection import train_test_split
# METHOD 1: Build a RandomForestRegressor.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.1, random_state=42)
from sklearn.ensemble import RandomForestRegressor
rf = RandomForestRegressor(n_estimators=500)
forest = rf.fit(X_train, y_train)
score_r2 = rf.score(X_val, y_val)
# Print R-squared.
print score_r2
rf_features = sorted(zip(map(lambda x: round(x, 4), rf.feature_importances_), X.columns), reverse=True)

import matplotlib.pyplot as plt
imps, feas = zip(*(rf_features[0:4] + rf_features[6:12]))
ypos = np.arange(len(feas))
plt.barh(ypos, imps, align='center', alpha=0.5)
plt.yticks(ypos, feas)
plt.xlabel('Feature Importance')

plt.subplot(1, 2, 1)
plt.plot(y_train, rf.predict(X_train), 'o')
plt.xlabel('Training_y')
plt.ylabel('Predict_y')
plt.xlim(-6, 6)
plt.ylim(-6, 6)
plt.subplot(1, 2, 2)
plt.plot(y_val, rf.predict(X_val), 'o')
plt.xlabel('val_y')
plt.ylabel('Predict_y')
plt.xlim(-3, 4)
plt.ylim(-3, 4)

X.columns

# Lasso method
from sklearn.linear_model import Lasso
Lassoreg = Lasso(alpha=1e-4, normalize=True, random_state=42)
Lassoreg.fit(X, y)
score_r2 = Lassoreg.score(X_val, y_val)
print score_r2

Ls_features = sorted(zip(map(lambda x: round(x, 4), Lassoreg.coef_), X.columns))
print Ls_features

y_val_rf = rf.predict(X_val)
y_val_Ls = Lassoreg.predict(X_val)
y_val_pred = (y_val_rf + y_val_Ls) / 2
from sklearn.metrics import r2_score
print r2_score(y_val, y_val_pred)

imps, feas = zip(*(Ls_features[0:4] + Ls_features[-4:]))
ypos = np.arange(len(feas))
plt.barh(ypos, imps, align='center', alpha=0.5)
plt.yticks(ypos, feas)
plt.xlabel('Feature Importance (Coefficient)')

plt.subplot(1, 2, 1)
plt.plot(y_train, Lassoreg.predict(X_train), 'o')
plt.xlabel('Training_y')
plt.ylabel('Predict_y')
plt.xlim(-6, 6)
plt.ylim(-6, 6)
plt.subplot(1, 2, 2)
plt.plot(y_val, Lassoreg.predict(X_val), 'o')
plt.xlabel('val_y')
plt.ylabel('Predict_y')
plt.xlim(-3, 4)
plt.ylim(-3, 4)
Movie_Rating/.ipynb_checkpoints/Culture_difference_movie_rating-checkpoint.ipynb
sadahanu/DataScience_SideProject
mit
Now analyze the model performance:
store.display_tfma_analysis(<insert model ID here>, slicing_column='trip_start_hour')
tfx/examples/airflow_workshop/notebooks/step6.ipynb
tensorflow/tfx
apache-2.0
Now plot the artifact lineage:
# Try different IDs here. Click stop in the plot when changing IDs.
%matplotlib notebook
store.plot_artifact_lineage(<insert model ID here>)
tfx/examples/airflow_workshop/notebooks/step6.ipynb
tensorflow/tfx
apache-2.0
Create a sample 2D image. Gaussians are placed on a grid with some random small offsets. The variable coords holds the known positions; these will not be known in a real experiment.
# Create coordinates with a random offset
coords = peakFind.lattice2D_2((1, 0), (0, 1), 2, 2, (0, 0), (5, 5))
coords += np.random.rand(coords.shape[0], coords.shape[1]) / 2.5
coords = np.array(coords) * 30 + (100, 100)
print('Coords shape = {}'.format(coords.shape))

# Create an image with the coordinates as gaussians
kernel_shape = (11, 11)
simIm = peakFind.peaksToImage(coords, (512, 512), (1.75, 2.75), kernel_shape)

fg, ax = plt.subplots(1, 2, sharex=True, sharey=True)
ax[0].imshow(simIm)
ax[1].imshow(simIm)
ax[1].scatter(coords[:, 1], coords[:, 0], c='r', marker='.')
fg.tight_layout()
ncempy/notebooks/example_peakFind.ipynb
ercius/openNCEM
gpl-3.0
Find the center pixel of each peak using ncempy.algo.peakFind.peakFind2D(). These will be integer values of the max peak positions. Gaussian fitting will be used to find the small random offsets. See the end of the notebook for an explanation of how this works.
coords_found = peakFind.peakFind2D(simIm, 0.5)

fg, ax = plt.subplots(1, 1)
ax.imshow(simIm)
_ = ax.scatter(coords_found[:, 1], coords_found[:, 0], c='r', marker='x')
ncempy/notebooks/example_peakFind.ipynb
ercius/openNCEM
gpl-3.0
Use Gaussian fitting for sub-pixel fitting. Each peak is fit to a 2D Gaussian function, and the average of the fitted sigma values is printed.
optPeaks, optI, fittingValues = peakFind.fit_peaks_gauss2D(simIm, coords_found, 5, (1.5, 2.5), ((-1.5, -1.5, 0, 0), (1.5, 1.5, 3, 3)))

# Plot the gaussian widths
f2, ax2 = plt.subplots(1, 2)
ax2[0].plot(optPeaks[:, 2], 'go')
ax2[0].plot(optPeaks[:, 3], 'ro')
ax2[0].set(title='Gaussian fit sigmas', xlabel='index sorted by peak intensity')
ax2[0].legend(labels=['width 0', 'width 1'])

stdMeans = np.mean(optPeaks[:, 2:4], axis=0)
# Print out the average of the fitted sigmas
print('Sigma means [s_0, s_1]: {}'.format(stdMeans))

# Plot the fitted center (relative from the intensity peak)
ax2[1].plot(fittingValues[:, 0], 'o')
ax2[1].plot(fittingValues[:, 1], 'o')
ax2[1].set(title="Gaussian fit relative centers", xlabel='index sorted by peak intensity')
_ = ax2[1].legend(labels=['center 0', 'center 1'])
ax2[1].set(ylim=(-0.5, 0.5))
ax2[1].set(yticks=(-0.5, -0.25, 0, 0.25, 0.5))
f2.tight_layout()
ncempy/notebooks/example_peakFind.ipynb
ercius/openNCEM
gpl-3.0
Plot to compare the known and fitted coordinates: coords are the expected positions we used to generate the image, coords_found are the peaks found with full pixel precision, and optPeaks are the optimized peak positions using Gaussian fitting. Zoom in to the peaks to see how well the fit worked.
fg, ax = plt.subplots(1, 1)
ax.imshow(simIm)
ax.scatter(coords_found[:, 1], coords_found[:, 0], c='b', marker='o')
ax.scatter(optPeaks[:, 1], optPeaks[:, 0], c='r', marker='x')
ax.scatter(coords[:, 1], coords[:, 0], c='k', marker='+')
_ = ax.legend(['integer', 'optimized', 'expected'])
ncempy/notebooks/example_peakFind.ipynb
ercius/openNCEM
gpl-3.0
Find the error in the fitting. Gaussian fitting can be heavily influenced by the tails, so some error is expected.
# Plot the RMS error for each fitted peak
# First sort each set of coordinates to match them
err = []
for a, b in zip(coords[np.argsort(coords[:, 0]), :], optPeaks[np.argsort(optPeaks[:, 0]), 0:2]):
    err.append(np.sqrt(np.sum((a - b)**2)))  # root of the summed squared component differences

fg, ax = plt.subplots(1, 1)
ax.plot(err)
_ = ax.set(xlabel='coordinate', ylabel='RMS error')
ncempy/notebooks/example_peakFind.ipynb
ercius/openNCEM
gpl-3.0
How does peakFind2D work with the roll? A very confusing point is the indexing used in meshgrid. If you use indexing='ij' then the peak position needs to be plotted in matplotlib backwards (row, col). If you change the meshgrid indexing to 'xy' then this issue is less confusing, BUT... the default indexing used to be 'ij' when I wrote this (and lots of other) code. So, now I stick with that convention.
# Copy doubleRoll from ncempy.algo.peakFind to look at the algorithm
def doubleRoll(image, vec):
    return np.roll(np.roll(image, vec[0], axis=0), vec[1], axis=1)
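To make the indexing point concrete, here is a tiny demonstration of my own (not from the original notebook) of how the two meshgrid conventions differ:

import numpy as np

# With indexing='ij' the first output varies along axis 0 (rows),
# so the resulting grids are addressed as (row, col).
YY, XX = np.meshgrid(range(3), range(4), indexing='ij')
print(YY.shape)  # (3, 4)

# With indexing='xy' the first two dimensions are swapped relative to 'ij',
# matching matplotlib's (x, y) expectations instead.
XX2, YY2 = np.meshgrid(range(4), range(3), indexing='xy')
print(XX2.shape)  # (3, 4)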
ncempy/notebooks/example_peakFind.ipynb
ercius/openNCEM
gpl-3.0
Create a single 2D Gaussian peak
known_peak = [6, 5]
YY, XX = np.meshgrid(range(0, 12), range(0, 12), indexing='ij')
gg = gaussND.gauss2D(XX, YY, known_peak[1], known_peak[0], 1, 1)
gg = np.round(gg, decimals=3)
plt.figure()
plt.imshow(gg)
ncempy/notebooks/example_peakFind.ipynb
ercius/openNCEM
gpl-3.0
Roll the array 1 pixel in each direction and compare the original and the rolled version. The peak will be moved by 1 pixel in each direction in each case. Here I ignore the next nearest neighbors (-1, -1) for simplicity (peakFind.doubleRoll2D does not ignore these). The peak will always be larger than its neighbor in the element-by-element comparison for each roll.
# Compare only nearest neighbors
roll01 = gg > doubleRoll(gg, [0, 1])
roll10 = gg > doubleRoll(gg, [1, 0])
roll11 = gg > doubleRoll(gg, [1, 1])
roll_1_1 = gg > doubleRoll(gg, [-1, -1])

fg, ax = plt.subplots(2, 2)
ax[0, 0].imshow(roll01)
ax[0, 1].imshow(roll10)
ax[1, 0].imshow(roll11)
ax[1, 1].imshow(roll_1_1)
for aa in ax.ravel():
    aa.scatter(known_peak[1], known_peak[0])
ax[0, 0].legend(['known peak position'])
ncempy/notebooks/example_peakFind.ipynb
ercius/openNCEM
gpl-3.0
Compare each rolled image. Use logical and to find the pixel which was highest in every comparison. The local peak will be the only one left.
final = roll01 & roll10 & roll11 & roll_1_1 fg,ax = plt.subplots(1,1) ax.imshow(final) ax.scatter(known_peak[1],known_peak[0])
ncempy/notebooks/example_peakFind.ipynb
ercius/openNCEM
gpl-3.0
Find the peak using where. We have a bool array above. np.where will return the indices of the True values, which correspond to the peak position(s).
peak_position = np.array(np.where(final))
print(peak_position)
ncempy/notebooks/example_peakFind.ipynb
ercius/openNCEM
gpl-3.0
Mouse B cell

We load the hic_data object from the BAM file
reso = 100000
cel1 = 'mouse_B'
cel2 = 'mouse_PSC'
rep1 = 'rep1'
rep2 = 'rep2'

hic_data1 = load_hic_data_from_bam(base_path.format(cel1, rep1),
                                   resolution=reso,
                                   biases=bias_path.format(cel1, rep1, reso // 1000),
                                   ncpus=8)
hic_data2 = load_hic_data_from_bam(base_path.format(cel1, rep2),
                                   resolution=reso,
                                   biases=bias_path.format(cel1, rep2, reso // 1000),
                                   ncpus=8)
doc/source/nbpictures/tutorial_9-Compare_and_merge_Hi-C_experiments.ipynb
3DGenomes/tadbit
gpl-3.0
We compare the interactions of the two Hi-C matrices at a given distance.

The Spearman rank correlation of the matrix diagonals

In the plot we represent the Spearman rank correlation of the diagonals of the matrices, starting from the main diagonal until the diagonal at 10 Mbp.
## this part is to "tune" the plot ## plt.figure(figsize=(9, 6)) axe = plt.subplot() axe.grid() axe.set_xticks(range(0, 55, 5)) axe.set_xticklabels(['%d Mb' % int(i * 0.2) if i else '' for i in range(0, 55, 5)], rotation=-45) ##################################### spearmans, dists, scc, std = correlate_matrices(hic_data1, hic_data2, max_dist=50, show=True, axe=axe) ## this part is to "tune" the plot ## plt.figure(figsize=(9, 6)) axe = plt.subplot() axe.grid() axe.set_xticks(range(0, 55, 5)) axe.set_xticklabels(['%d Mb' % int(i * 0.2) if i else '' for i in range(0, 55, 5)], rotation=-45) ##################################### spearmans, dists, scc, std = correlate_matrices(hic_data1, hic_data2, max_dist=50, show=True, axe=axe, normalized=True)
doc/source/nbpictures/tutorial_9-Compare_and_merge_Hi-C_experiments.ipynb
3DGenomes/tadbit
gpl-3.0
The SCC score, as in HiCrep (see https://doi.org/10.1101/gr.220640.117), is also computed. The value of SCC ranges from −1 to 1 and can be interpreted in a way similar to the standard correlation.
print('SCC score: %.4f (+- %.7f)' % (scc, std))

reso = 1000000
hic_data1 = hic_data2 = None
hic_data1 = load_hic_data_from_bam(base_path.format(cel1, rep1),
                                   resolution=reso,
                                   biases=bias_path.format(cel1, rep1, reso // 1000),
                                   ncpus=8)
hic_data2 = load_hic_data_from_bam(base_path.format(cel1, rep2),
                                   resolution=reso,
                                   biases=bias_path.format(cel1, rep2, reso // 1000),
                                   ncpus=8)
doc/source/nbpictures/tutorial_9-Compare_and_merge_Hi-C_experiments.ipynb
3DGenomes/tadbit
gpl-3.0
The correlation of the eigenvectors

Since the eigenvectors of a matrix capture its internal correlations [26], two matrices with highly correlated eigenvectors are considered to have similar structure. In this case we limit the computation to the first 6 eigenvectors.
corrs = eig_correlate_matrices(hic_data1, hic_data2, show=True, aspect='auto', normalized=True)
for cor in corrs:
    print(' '.join(['%5.3f' % (c) for c in cor]) + '\n')
doc/source/nbpictures/tutorial_9-Compare_and_merge_Hi-C_experiments.ipynb
3DGenomes/tadbit
gpl-3.0
The reproducibility score (Q)

Computed as in HiC-spector (https://doi.org/10.1093/bioinformatics/btx152), it is also based on comparing eigenvectors. The reproducibility score ranges from 0 (low similarity) to 1 (identity).
reprod = get_reproducibility(hic_data1, hic_data2, num_evec=20, normalized=True, verbose=False)
print('Reproducibility score: %.4f' % (reprod))
doc/source/nbpictures/tutorial_9-Compare_and_merge_Hi-C_experiments.ipynb
3DGenomes/tadbit
gpl-3.0
Mouse iPS cell

We load the hic_data object from the BAM file
reso = 100000
hic_data1 = hic_data2 = None
hic_data1 = load_hic_data_from_bam(base_path.format(cel2, rep1),
                                   resolution=reso,
                                   biases=bias_path.format(cel2, rep1, reso // 1000),
                                   ncpus=8)
hic_data2 = load_hic_data_from_bam(base_path.format(cel2, rep2),
                                   resolution=reso,
                                   biases=bias_path.format(cel2, rep2, reso // 1000),
                                   ncpus=8)
doc/source/nbpictures/tutorial_9-Compare_and_merge_Hi-C_experiments.ipynb
3DGenomes/tadbit
gpl-3.0
We compare the interactions of the two Hi-C matrices at a given distance.

The Spearman rank correlation of the matrix diagonals

In the plot we represent the Spearman rank correlation of the diagonals of the matrices, starting from the main diagonal until the diagonal at 10 Mbp.
## this part is to "tune" the plot ## plt.figure(figsize=(9, 6)) axe = plt.subplot() axe.grid() axe.set_xticks(range(0, 55, 5)) axe.set_xticklabels(['%d Mb' % int(i * 0.2) if i else '' for i in range(0, 55, 5)], rotation=-45) ##################################### spearmans, dists, scc, std = correlate_matrices(hic_data1, hic_data2, max_dist=50, show=True, axe=axe)
doc/source/nbpictures/tutorial_9-Compare_and_merge_Hi-C_experiments.ipynb
3DGenomes/tadbit
gpl-3.0
The SCC score, as in HiCrep (see https://doi.org/10.1101/gr.220640.117), is also computed. The value of SCC ranges from −1 to 1 and can be interpreted in a way similar to the standard correlation.
print('SCC score: %.4f (+- %.7f)' % (scc, std))

reso = 1000000
hic_data1 = hic_data2 = None
hic_data1 = load_hic_data_from_bam(base_path.format(cel2, rep1),
                                   resolution=reso,
                                   biases=bias_path.format(cel2, rep1, reso // 1000),
                                   ncpus=8)
hic_data2 = load_hic_data_from_bam(base_path.format(cel2, rep2),
                                   resolution=reso,
                                   biases=bias_path.format(cel2, rep2, reso // 1000),
                                   ncpus=8)
doc/source/nbpictures/tutorial_9-Compare_and_merge_Hi-C_experiments.ipynb
3DGenomes/tadbit
gpl-3.0
Comparison between cell types

Replicate 1
reso = 100000
hic_data1 = hic_data2 = None
hic_data1 = load_hic_data_from_bam(base_path.format(cel1, rep1),
                                   resolution=reso,
                                   biases=bias_path.format(cel1, rep1, reso // 1000),
                                   ncpus=8)
hic_data2 = load_hic_data_from_bam(base_path.format(cel2, rep1),
                                   resolution=reso,
                                   biases=bias_path.format(cel2, rep1, reso // 1000),
                                   ncpus=8)

## this part is to "tune" the plot ##
plt.figure(figsize=(9, 6))
axe = plt.subplot()
axe.grid()
axe.set_xticks(range(0, 55, 5))
axe.set_xticklabels(['%d Mb' % int(i * 0.2) if i else '' for i in range(0, 55, 5)], rotation=-45)
#####################################
spearmans, dists, scc, std = correlate_matrices(hic_data1, hic_data2, max_dist=50, show=True, axe=axe)
doc/source/nbpictures/tutorial_9-Compare_and_merge_Hi-C_experiments.ipynb
3DGenomes/tadbit
gpl-3.0
We expect a lower SCC score between different cell types
print('SCC score: %.4f (+- %.7f)' % (scc, std))

reso = 1000000
hic_data1 = load_hic_data_from_bam(base_path.format(cel1, rep1),
                                   resolution=reso,
                                   biases=bias_path.format(cel1, rep1, reso // 1000),
                                   ncpus=8)
hic_data2 = load_hic_data_from_bam(base_path.format(cel2, rep1),
                                   resolution=reso,
                                   biases=bias_path.format(cel2, rep1, reso // 1000),
                                   ncpus=8)

corrs = eig_correlate_matrices(hic_data1, hic_data2, show=True, aspect='auto', normalized=True)
for cor in corrs:
    print(' '.join(['%5.3f' % (c) for c in cor]) + '\n')

reprod = get_reproducibility(hic_data1, hic_data2, num_evec=20, normalized=True, verbose=False)
print('Reproducibility score: %.4f' % (reprod))
doc/source/nbpictures/tutorial_9-Compare_and_merge_Hi-C_experiments.ipynb
3DGenomes/tadbit
gpl-3.0
Replicate 2
reso = 100000
hic_data1 = hic_data2 = None
hic_data1 = load_hic_data_from_bam(base_path.format(cel1, rep2),
                                   resolution=reso,
                                   biases=bias_path.format(cel1, rep2, reso // 1000),
                                   ncpus=8)
hic_data2 = load_hic_data_from_bam(base_path.format(cel2, rep2),
                                   resolution=reso,
                                   biases=bias_path.format(cel2, rep2, reso // 1000),
                                   ncpus=8)

## this part is to "tune" the plot ##
plt.figure(figsize=(9, 6))
axe = plt.subplot()
axe.grid()
axe.set_xticks(range(0, 55, 5))
axe.set_xticklabels(['%d Mb' % int(i * 0.2) if i else '' for i in range(0, 55, 5)], rotation=-45)
#####################################
spearmans, dists, scc, std = correlate_matrices(hic_data1, hic_data2, max_dist=50, show=True, axe=axe)

print('SCC score: %.4f (+- %.7f)' % (scc, std))

reso = 1000000
hic_data1 = load_hic_data_from_bam(base_path.format(cel1, rep2),
                                   resolution=reso,
                                   biases=bias_path.format(cel1, rep2, reso // 1000),
                                   ncpus=8)
hic_data2 = load_hic_data_from_bam(base_path.format(cel2, rep2),
                                   resolution=reso,
                                   biases=bias_path.format(cel2, rep2, reso // 1000),
                                   ncpus=8)

corrs = eig_correlate_matrices(hic_data1, hic_data2, show=True, aspect='auto', normalized=True)
for cor in corrs:
    print(' '.join(['%5.3f' % (c) for c in cor]) + '\n')

reprod = get_reproducibility(hic_data1, hic_data2, num_evec=20, normalized=True, verbose=False)
print('Reproducibility score: %.4f' % (reprod))
doc/source/nbpictures/tutorial_9-Compare_and_merge_Hi-C_experiments.ipynb
3DGenomes/tadbit
gpl-3.0
Merge Hi-C experiments

Once we agree that the experiments are similar, they can be merged. Here is a simple way to merge valid pairs. Arguably we may want to merge unfiltered data, but the difference would be minimal, especially with non-replicates.
from pytadbit.mapping import merge_bams

! mkdir -p results/fragment/mouse_B_both/
! mkdir -p results/fragment/mouse_PSC_both/
! mkdir -p results/fragment/mouse_B_both/03_filtering/
! mkdir -p results/fragment/mouse_PSC_both/03_filtering/

cell = 'mouse_B'
rep1 = 'rep1'
rep2 = 'rep2'
hic_data1 = 'results/fragment/{0}_{1}/03_filtering/valid_reads12_{0}_{1}.bam'.format(cell, rep1)
hic_data2 = 'results/fragment/{0}_{1}/03_filtering/valid_reads12_{0}_{1}.bam'.format(cell, rep2)
hic_data = 'results/fragment/{0}_both/03_filtering/valid_reads12_{0}.bam'.format(cell)
merge_bams(hic_data1, hic_data2, hic_data)

cell = 'mouse_PSC'
rep1 = 'rep1'
rep2 = 'rep2'
hic_data1 = 'results/fragment/{0}_{1}/03_filtering/valid_reads12_{0}_{1}.bam'.format(cell, rep1)
hic_data2 = 'results/fragment/{0}_{1}/03_filtering/valid_reads12_{0}_{1}.bam'.format(cell, rep2)
hic_data = 'results/fragment/{0}_both/03_filtering/valid_reads12_{0}.bam'.format(cell)
merge_bams(hic_data1, hic_data2, hic_data)
doc/source/nbpictures/tutorial_9-Compare_and_merge_Hi-C_experiments.ipynb
3DGenomes/tadbit
gpl-3.0
Normalizing merged data
from pytadbit.mapping.analyze import hic_map

! mkdir -p results/fragment/mouse_B_both/04_normalizing
! mkdir -p results/fragment/mouse_PSC_both/04_normalizing
doc/source/nbpictures/tutorial_9-Compare_and_merge_Hi-C_experiments.ipynb
3DGenomes/tadbit
gpl-3.0
All in one loop to:
- filter
- normalize
- generate intra-chromosome and genomic matrices

All datasets are analysed at various resolutions.
for cell in ['mouse_B', 'mouse_PSC']:
    print(' -', cell)
    for reso in [1000000, 200000, 100000]:
        print(' *', reso)
        # load hic_data
        hic_data = load_hic_data_from_bam(
            'results/fragment/{0}_both/03_filtering/valid_reads12_{0}.bam'.format(cell), reso)
        # filter columns
        hic_data.filter_columns(draw_hist=False, min_count=10, by_mean=True)
        # normalize
        hic_data.normalize_hic(iterations=0)
        # save biases to reconstruct normalization
        hic_data.save_biases('results/fragment/{0}_both/04_normalizing/biases_{0}_both_{1}kb.biases'.format(cell, reso // 1000))
        # save data as raw matrix per chromosome
        hic_map(hic_data, by_chrom='intra', normalized=False,
                savedata='results/fragment/{1}_both/04_normalizing/{0}_raw'.format(reso, cell))
        # save data as normalized matrix per chromosome
        hic_map(hic_data, by_chrom='intra', normalized=True,
                savedata='results/fragment/{1}_both/04_normalizing/{0}_norm'.format(reso, cell))
        # if the resolution is low save the full genomic matrix
        if reso > 500000:
            hic_map(hic_data, by_chrom=False, normalized=False,
                    savefig='results/fragment/{1}_both/04_normalizing/{0}_raw.png'.format(reso, cell),
                    savedata='results/fragment/{1}_both/04_normalizing/{0}_raw.mat'.format(reso, cell))
            hic_map(hic_data, by_chrom=False, normalized=True,
                    savefig='results/fragment/{1}_both/04_normalizing/{0}_norm.png'.format(reso, cell),
                    savedata='results/fragment/{1}_both/04_normalizing/{0}_norm.mat'.format(reso, cell))
doc/source/nbpictures/tutorial_9-Compare_and_merge_Hi-C_experiments.ipynb
3DGenomes/tadbit
gpl-3.0
We begin by importing the usual libraries, setting up a very simple dataloader, and generating a toy dataset of spirals.
def dataloader(arrays, batch_size, *, key):
    dataset_size = arrays[0].shape[0]
    assert all(array.shape[0] == dataset_size for array in arrays)
    indices = jnp.arange(dataset_size)
    while True:
        perm = jrandom.permutation(key, indices)
        (key,) = jrandom.split(key, 1)
        start = 0
        end = batch_size
        while end < dataset_size:
            batch_perm = perm[start:end]
            yield tuple(array[batch_perm] for array in arrays)
            start = end
            end = start + batch_size


def get_data(dataset_size, *, key):
    t = jnp.linspace(0, 2 * math.pi, 16)
    offset = jrandom.uniform(key, (dataset_size, 1), minval=0, maxval=2 * math.pi)
    x1 = jnp.sin(t + offset) / (1 + t)
    x2 = jnp.cos(t + offset) / (1 + t)
    y = jnp.ones((dataset_size, 1))
    half_dataset_size = dataset_size // 2
    x1 = x1.at[:half_dataset_size].multiply(-1)
    y = y.at[:half_dataset_size].set(0)
    x = jnp.stack([x1, x2], axis=-1)
    return x, y
examples/train_rnn.ipynb
patrick-kidger/equinox
apache-2.0
Now for our model. Purely by way of example, we handle the final adding on of bias ourselves, rather than letting the linear layer do it. This is just so we can demonstrate how to use custom parameters in models.
class RNN(eqx.Module):
    hidden_size: int
    cell: eqx.Module
    linear: eqx.nn.Linear
    bias: jnp.ndarray

    def __init__(self, in_size, out_size, hidden_size, *, key):
        ckey, lkey = jrandom.split(key)
        self.hidden_size = hidden_size
        self.cell = eqx.nn.GRUCell(in_size, hidden_size, key=ckey)
        self.linear = eqx.nn.Linear(hidden_size, out_size, use_bias=False, key=lkey)
        self.bias = jnp.zeros(out_size)

    def __call__(self, input):
        hidden = jnp.zeros((self.hidden_size,))

        def f(carry, inp):
            return self.cell(inp, carry), None

        out, _ = lax.scan(f, hidden, input)
        # sigmoid because we're performing binary classification
        return jax.nn.sigmoid(self.linear(out) + self.bias)
examples/train_rnn.ipynb
patrick-kidger/equinox
apache-2.0
And finally the training loop.
def main(
    dataset_size=10000,
    batch_size=32,
    learning_rate=3e-3,
    steps=200,
    hidden_size=16,
    depth=1,
    seed=5678,
):
    data_key, loader_key, model_key = jrandom.split(jrandom.PRNGKey(seed), 3)
    xs, ys = get_data(dataset_size, key=data_key)
    iter_data = dataloader((xs, ys), batch_size, key=loader_key)

    model = RNN(in_size=2, out_size=1, hidden_size=hidden_size, key=model_key)

    @eqx.filter_value_and_grad
    def compute_loss(model, x, y):
        pred_y = jax.vmap(model)(x)
        # Trains with respect to binary cross-entropy
        return -jnp.mean(y * jnp.log(pred_y) + (1 - y) * jnp.log(1 - pred_y))

    # Important for efficiency whenever you use JAX: wrap everything into a single JIT region.
    @eqx.filter_jit
    def make_step(model, x, y, opt_state):
        loss, grads = compute_loss(model, x, y)
        updates, opt_state = optim.update(grads, opt_state)
        model = eqx.apply_updates(model, updates)
        return loss, model, opt_state

    optim = optax.adam(learning_rate)
    opt_state = optim.init(model)
    for step, (x, y) in zip(range(steps), iter_data):
        loss, model, opt_state = make_step(model, x, y, opt_state)
        loss = loss.item()
        print(f"step={step}, loss={loss}")

    pred_ys = jax.vmap(model)(xs)
    num_correct = jnp.sum((pred_ys > 0.5) == ys)
    final_accuracy = (num_correct / dataset_size).item()
    print(f"final_accuracy={final_accuracy}")
examples/train_rnn.ipynb
patrick-kidger/equinox
apache-2.0
eqx.filter_value_and_grad will calculate the gradient with respect to the first argument (model). By default it will calculate gradients for all the floating-point JAX arrays and ignore everything else. For example the model parameters will be differentiated, whilst model.hidden_size is an integer and will be left alone. If you need finer control then these defaults can be adjusted; see [equinox.filter_grad][] and [equinox.filter_value_and_grad][]. Likewise, by default, eqx.filter_jit will look at all the arguments passed to make_step, and automatically JIT-trace every array and JIT-static everything else. For example the model parameters and the data x and y will be traced, whilst model.hidden_size is an integer and will be static'd instead. Once again if you need finer control then these defaults can be adjusted; see [equinox.filter_jit][].
main() # All right, let's run the code.
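To make the filtering concrete, here is a small sketch of my own (not part of the original example) that uses eqx.partition to perform the same split the filtered transformations rely on; the model construction here is just for illustration:

# Split a model into its differentiable part and its static part.
# `params` keeps the floating-point JAX arrays (the weights, and our custom `bias`);
# `static` keeps everything else (e.g. the integer `hidden_size`).
model = RNN(in_size=2, out_size=1, hidden_size=16, key=jrandom.PRNGKey(0))
params, static = eqx.partition(model, eqx.is_inexact_array)
model_again = eqx.combine(params, static)  # reassembles the original model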
examples/train_rnn.ipynb
patrick-kidger/equinox
apache-2.0
Loading Model Results First, we need to find the list of all directories in our model output folder from the 001-storing-model-results notebook. We can do this using the glob and os modules, which will allow us to work with directories and list their contents.
import os

# Using os.listdir to show the current directory
os.listdir("./")

# Using os.listdir to show the output directory
os.listdir("output")[0:5]

import glob

# Using glob to list the output directory
glob.glob("output/run-*")[0:5]
notebooks/basic-stats/002-reading-model-results.ipynb
mjbommar/cscs-530-w2016
bsd-2-clause
Using os.path.join and os.path.basename We can also create paths and navigate directory trees using os.path.join. This method helps build file and directory paths, like we see below.
run_directory = os.listdir("output")[0]
print(run_directory)
print(os.path.join(run_directory, "parameters.csv"))
print(run_directory)
print(os.path.basename(run_directory))
notebooks/basic-stats/002-reading-model-results.ipynb
mjbommar/cscs-530-w2016
bsd-2-clause
Iterating through model run directories

Next, once we are able to "find" all model run directories, we need to iterate through them and read all data from them. In the cells below, we create data frames for each CSV output file from our 001-storing-model-results notebook.
# Create "complete" data frames run_data = [] all_timeseries_data = pandas.DataFrame() all_interaction_data = pandas.DataFrame() # Iterate over all directories for run_directory in glob.glob("output/run*"): # Get the run ID from our directory name run_id = os.path.basename(run_directory) # Load parameter and reshape run_parameter_data = pandas.read_csv(os.path.join(run_directory, "parameters.csv")) run_parameter_data.index = run_parameter_data["parameter"] # Load timeseries and interactions run_interaction_data = pandas.read_csv(os.path.join(run_directory, "interactions.csv")) run_interaction_data["run"] = run_id run_ts_data = pandas.read_csv(os.path.join(run_directory, "timeseries.csv")) run_ts_data["run"] = run_id # Flatten parameter data into interaction and TS data for parameter_name in run_parameter_data.index: run_ts_data.loc[:, parameter_name] = run_parameter_data.loc[parameter_name, "value"] if run_interaction_data.shape[0] > 0: for parameter_name in run_parameter_data.index: run_interaction_data.loc[:, parameter_name] = run_parameter_data.loc[parameter_name, "value"] # Store raw run data run_data.append({"parameters": run_parameter_data, "interactions": run_interaction_data, "timeseries": run_ts_data}) # Update final steps all_timeseries_data = all_timeseries_data.append(run_ts_data) all_interaction_data = all_interaction_data.append(run_interaction_data) # let's see how many records we have. print(all_timeseries_data.shape) print(all_interaction_data.shape) # Let's see what the data looks like. all_timeseries_data.head() all_interaction_data.head() %matplotlib inline # let's use groupby to find some information. last_step_data = all_timeseries_data.groupby("run").tail(1) # Simple plot f = plt.figure() plt.scatter(last_step_data["min_subsidy"], last_step_data["num_infected"], alpha=0.5) plt.xlabel("Subsidy") plt.ylabel("Number infected") plt.title("Subsidy vs. number infected") # Let's use groupby with **multiple** variables now. mean_infected_by_subsidy = all_timeseries_data.groupby(["run", "min_subsidy", "min_prob_hookup"])["num_infected"].mean() std_infected_by_subsidy = all_timeseries_data.groupby(["run", "min_subsidy", "min_prob_hookup"])["num_infected"].std() infected_by_subsidy = pandas.concat((mean_infected_by_subsidy, std_infected_by_subsidy), axis=1) infected_by_subsidy.columns = ["mean", "std"] infected_by_subsidy.head() # Plot a distribution f = plt.figure() _ = plt.hist(last_step_data["num_infected"].values, color="red", alpha=0.5) plt.xlabel("Number infected") plt.ylabel("Frequency") plt.title("Distribution of number infected") # Perform distribution tests for no subsidy vs. some subsidy no_subsidy_data = last_step_data.loc[last_step_data["min_subsidy"] == 0, "num_infected"] some_subsidy_data = last_step_data.loc[last_step_data["min_subsidy"] > 0, "num_infected"] # Plot a distribution f = plt.figure() _ = plt.hist(no_subsidy_data.values, color="red", alpha=0.25) _ = plt.hist(some_subsidy_data.values, color="blue", alpha=0.25) plt.xlabel("Number infected") plt.ylabel("Frequency") plt.title("Distribution of number infected") # Test for normality print(scipy.stats.shapiro(no_subsidy_data)) print(scipy.stats.shapiro(some_subsidy_data)) # Test for equal variances print(scipy.stats.levene(no_subsidy_data, some_subsidy_data)) # Perform t-test print(scipy.stats.ttest_ind(no_subsidy_data, some_subsidy_data)) # Perform rank-sum test print(scipy.stats.ranksums(no_subsidy_data, some_subsidy_data))
notebooks/basic-stats/002-reading-model-results.ipynb
mjbommar/cscs-530-w2016
bsd-2-clause
This is a pandas DataFrame object. It has a lot of great properties that are beyond the scope of our tutorials.
forecast_data['temp_air'].plot();
docs/tutorials/forecast_to_power.ipynb
cwhanse/pvlib-python
bsd-3-clause
Plot the GHI data. Most pvlib forecast models derive this data from the weather models' cloud cover data.
ghi = forecast_data['ghi']
ghi.plot()
plt.ylabel('Irradiance ($W/m^{-2}$)');
docs/tutorials/forecast_to_power.ipynb
cwhanse/pvlib-python
bsd-3-clause
Note that AOI has values greater than 90 deg. This is ok.

POA total

Calculate POA irradiance
poa_irrad = irradiance.poa_components(aoi, forecast_data['dni'], poa_sky_diffuse, poa_ground_diffuse)

poa_irrad.plot()
plt.ylabel('Irradiance ($W/m^{-2}$)')
plt.title('POA Irradiance');
docs/tutorials/forecast_to_power.ipynb
cwhanse/pvlib-python
bsd-3-clause
Cell temperature

Calculate PV cell temperature
ambient_temperature = forecast_data['temp_air']
wnd_spd = forecast_data['wind_speed']
thermal_params = temperature.TEMPERATURE_MODEL_PARAMETERS['sapm']['open_rack_glass_polymer']
pvtemp = temperature.sapm_cell(poa_irrad['poa_global'], ambient_temperature, wnd_spd, **thermal_params)

pvtemp.plot()
plt.ylabel('Temperature (C)');
docs/tutorials/forecast_to_power.ipynb
cwhanse/pvlib-python
bsd-3-clause
Run the SAPM using the parameters we calculated above.
effective_irradiance = pvsystem.sapm_effective_irradiance(poa_irrad.poa_direct, poa_irrad.poa_diffuse, airmass, aoi, sandia_module)
sapm_out = pvsystem.sapm(effective_irradiance, pvtemp, sandia_module)
# print(sapm_out.head())

sapm_out[['p_mp']].plot()
plt.ylabel('DC Power (W)');
docs/tutorials/forecast_to_power.ipynb
cwhanse/pvlib-python
bsd-3-clause
Choose a particular inverter
sapm_inverter = sapm_inverters['ABB__MICRO_0_25_I_OUTD_US_208__208V_']
sapm_inverter

p_ac = inverter.sandia(sapm_out.v_mp, sapm_out.p_mp, sapm_inverter)

p_ac.plot()
plt.ylabel('AC Power (W)')
plt.ylim(0, None);
docs/tutorials/forecast_to_power.ipynb
cwhanse/pvlib-python
bsd-3-clause
Plot just a few days.
p_ac[start:start+pd.Timedelta(days=2)].plot();
docs/tutorials/forecast_to_power.ipynb
cwhanse/pvlib-python
bsd-3-clause
ToDo

Your Network Summary
* Network source and preprocessing
* Node/Edge attributes
* Size, Order
* Gorgeous network layout. Try to show that your network has some structure; play with node sizes and colors, scaling parameters. Tools like Gephi may be useful here
* Degree distribution, Diameter, Clustering Coefficient

Structural Analysis
* Degree/Closeness/Betweenness centralities. Top nodes interpretation
* Page-Rank. Comparison with centralities
* Assortative Mixing according to node attributes
* Node structural equivalence/similarity

Community Detection
* Clique search
* Best results of various community detection algorithms, both in terms of interpretation and some quality criterion. Since Networkx has no community detection algorithms, use additional modules, e.g. igraph, communities, graph-tool, etc.
* The results should be visible on the network layout or adjacency matrix picture

<center>Structural Analysis and Visualization of Networks</center> <center>Analysis of a facebook graph</center> <center>Student: Nazarov Ivan</center>

Summary

Network source

This graph shows friend relationships among the people in my facebook friends list. The network was obtained with the Netviz facebook app. A purely technical step: prior to loading with the networkx procedure $\text{read\_gml}(\cdot)$, the GML file was preprocessed to convert UTF-8 encoded characters into special HTML entities (a sketch of this step is shown after the loading cell below). In fact the problem seems to be rooted in the software used to crawl the facebook network.

Attributes

The nodes have a short list of attributes, which are:
* gender;
* number of posts on the wall;
* locale, which represents the language setting of that node's facebook page.

The network does not have any edge attributes.
G = nx.read_gml(
    path = "./data/ha5/huge_100004196072232_2015_03_24_11_20_1d58b0ecdf7713656ebbf1a177e81fab.gml",
    relabel = False )
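A minimal sketch of the preprocessing step mentioned above (my own illustration, not the notebook's actual code; the file names raw.gml and clean.gml are hypothetical):

# Hypothetical Python 2 sketch: rewrite a UTF-8 GML file so that every
# non-ASCII character becomes an HTML/XML character reference,
# which networkx's GML parser accepts.
import codecs
with codecs.open( "raw.gml", "r", encoding = "utf-8" ) as fin :
    text = fin.read( )
with open( "clean.gml", "w" ) as fout :
    fout.write( text.encode( "ascii", "xmlcharrefreplace" ) )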
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_ha_final.ipynb
ivannz/study_notes
mit
The order of a network $G=(V,E)$ is $|V|$ and the size is $|E|$.
print "The network G is of the order %d. Its size is %d." % ( G.number_of_nodes( ), G.number_of_edges( ) )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_ha_final.ipynb
ivannz/study_notes
mit
Visualisation It is always good to have a nice and attractive picture in a study.
deg = G.degree( )

fig = plt.figure( figsize = (12, 8) )
axs = fig.add_subplot( 1, 1, 1, axisbg = 'black' )
nx.draw_networkx( G, with_labels = False, ax = axs,
                  cmap = plt.cm.Purples,
                  node_color = deg.values( ),
                  edge_color = "magenta",
                  nodelist = deg.keys( ),
                  node_size = [ 100 * np.log( d + 1 ) for d in deg.values( ) ],
                  pos = nx.fruchterman_reingold_layout( G ) )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_ha_final.ipynb
ivannz/study_notes
mit
Let's have a look at the connected components, since the plot suggests that the graph is not connected.
CC = sorted( nx.connected_components( G ), key = len, reverse = True )
for i, c in enumerate( CC, 1 ):
    row = ", ".join( [ G.node[ n ][ 'label' ] for n in c ] )
    print "%#2d (%d)\t" % ( i, len( c ) ), ( row )[ :100 ].strip( ) + ( " ..." if len( row ) > 100 else "" )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_ha_final.ipynb
ivannz/study_notes
mit
The largest connected component represents family, my acquaintances at school ($\leq 2003$) and at university ($2003-2009$), while the second largest component consists of people I met at the Oxford Royale Summer School in 2012. The one-node components are either old acquaintances, select colleagues from work, instructors, etc. Since the largest component is an order of magnitude larger than the next biggest, I decided to focus just on it rather than on the whole network. In fact it covers almost $\frac{91}{121}\approx 75\%$ of the vertices and $\frac{1030}{1091} \approx 94\%$ of the edges.
H = G.subgraph( CC[ 0 ] )
print "The largest component is of the order %d. Its size is %d." % ( H.number_of_nodes( ), H.number_of_edges( ) )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_ha_final.ipynb
ivannz/study_notes
mit
Let's plot the subgraph and study its degree distribution.
deg = H.degree( )

fig = plt.figure( figsize = (16, 6) )
axs = fig.add_subplot( 1, 2, 1, axisbg = 'black', title = "Master cluster" )

pos = nx.fruchterman_reingold_layout( H )
nx.draw_networkx( H, with_labels = False, ax = axs,
    cmap = plt.cm.Oranges, node_color = deg.values( ), edge_color = "cyan",
    nodelist = deg.keys( ), node_size = [ d * 10 for d in deg.values( ) ],
    pos = pos )

## Degree distribution
v, f = np.unique( nx.degree( H ).values( ), return_counts = True )
axs = fig.add_subplot( 1, 2, 2, xlabel = "Degree", ylabel = "Frequency",
    title = "Node degree frequency" )
axs.plot( v, f, "ob" )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_ha_final.ipynb
ivannz/study_notes
mit
Degree distribution

A useful tool for exploring the tail behaviour of a sample is the Mean Excess plot, defined as $$M(u) = \mathbb{E}\left(\left. X - u \,\right\rvert\, X \geq u \right)$$ whose empirical counterpart is $$\hat{M}(u) = \Big(\sum_{i=1}^n 1_{x_i\geq u}\Big)^{-1} \sum_{i=1}^n (x_i-u)\, 1_{x_i\geq u}$$

The key properties of $M(u)$ are:
* it steadily increases for power-law tails, and the steeper the slope, the smaller the exponent;
* it levels off for exponential tails (heuristically: the case $\alpha\to \infty$ is similar to an exponential tail);
* it decays towards zero for the tail of a compactly supported distribution.

When dealing with empirical mean excesses, one looks for the trend at large thresholds to discern the behaviour, necessarily bearing in mind that in that region the variance of $\hat{M}(u)$ grows. A minimal illustration of the estimator at a single threshold is sketched right below; the next cell computes it over all observed thresholds at once.
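As an addition to the original analysis, here is a minimal sketch that evaluates $\hat{M}(u)$ at one threshold directly from the definition: average $x - u$ over the observations with $x \geq u$. The `mean_excess_at` helper and the example threshold are hypothetical.

```python
def mean_excess_at( x, u ) :
    ## empirical mean excess at a single threshold u
    x = np.asarray( x, dtype = np.float )
    tail = x[ x >= u ]
    return ( tail - u ).mean( )

## e.g. for the node degrees of the master component at threshold u = 10:
## mean_excess_at( H.degree( ).values( ), 10.0 )
```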
from scipy.stats import rankdata

def mean_excess( data ) :
    data = np.array( sorted( data, reverse = True ) )
    ranks = rankdata( data, method = 'max' )
    excesses = np.array( np.unique( len( data ) - ranks ), dtype = np.int )
    thresholds = data[ excesses ]
    mean_excess = np.cumsum( data )[ excesses ] / ( excesses + 0.0 ) - thresholds
    return thresholds, mean_excess

plt.figure( figsize = ( 8, 6 ) )
u, m = mean_excess( H.degree( ).values( ) )
plt.plot( u, m, lw = 2 )
plt.title( "Mean Excess plot of node degree" )
plt.xlabel( "Threshold" )
plt.ylabel( "Expected excess over the threshold" )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_ha_final.ipynb
ivannz/study_notes
mit
The Mean Excess plot does seem to indicate that the node degree does not follow a scale-free distribution. Indeed, the plot levels off as it approaches the value $50$. The rightmost spike is in the region where the variance of the estimate of the conditional expectation is extremely high, which is why this artefact of the finite sample may be ignored.

Clustering tightness

The average clustering coefficient of a graph $G=(V,E)$ is defined by the following formula: $$\bar{c} = \frac{1}{n}\sum_{x\in V}c_x$$ where $n=|V|$ and $c_x$ is the local clustering coefficient of vertex $x\in V$ defined below.

The local (triangular) clustering coefficient of a node $x\in V$ is defined as the ratio of the number of distinct edge triangles containing $x$ to the maximal number of such triangles given its degree $\delta_x$ in $G$, attained in a complete graph on $x$ and its neighbours. The expression for $c_x$ is $$c_x = \frac{1}{\delta_x (\delta_x-1)} \sum_{u\neq x} \sum_{v\neq x,u} 1_{xu}\, 1_{uv}\, 1_{vx} = \frac{\#_x}{\delta_x (\delta_x-1)}$$ where $1_{ij}$ is the indicator equal to $1$ if the undirected edge $(i,j)\in E$ and $0$ otherwise, and $\#_x$ denotes the double sum, which counts every triangle through $x$ twice (once per ordered pair of neighbours). A hand computation of $c_x$ for a single node is sketched below as a check against networkx.
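As an addition to the notebook, the sketch below computes $c_x$ for one node by counting connected neighbour pairs, and compares the result with networkx's built-in nx.clustering(); the `local_clustering` helper is hypothetical.

```python
from itertools import combinations

def local_clustering( G, x ) :
    nbrs = list( G.neighbors( x ) )
    d = len( nbrs )
    if d < 2 :
        return 0.0
    ## the number of connected neighbour pairs equals the number of triangles through x
    triangles = sum( 1 for u, v in combinations( nbrs, 2 ) if G.has_edge( u, v ) )
    return triangles / ( d * ( d - 1 ) / 2.0 )

node = list( H.nodes( ) )[ 0 ]
print "%.3f %.3f" % ( local_clustering( H, node ), nx.clustering( H, node ) )
```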
print "This subgraph's clustering coefficient is %.3f." % nx.average_clustering( H ) print "This subgraph's average shortest path length is %.3f." % nx.average_shortest_path_length( H ) print "The radius (maximal distance) is %d." % nx.radius( H )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_ha_final.ipynb
ivannz/study_notes
mit
The clustering coefficient is moderately high, and any two members in this component are 2 hops away from each other on average. This means that this subgraph has a tightly knit cluster structure, almost like a small world, were it not for the light-tailed degree distribution.

Structural analysis

Centrality measures

Degree

The degree centrality of a node $v\in V$ in a graph $G=\big(V, E\big)$ is the number of edges incident on it: $$C_v = \sum_{u\in V} 1_{(v,u)\in E} = \sum_{u\in V} A_{vu} = \delta_v$$ In other words, the more first-tier (nearest, reachable in one hop) neighbours a vertex has, the higher its centrality is.

Betweenness

This measure assesses how important a node is in terms of the global graph connectivity: $$C_B(v) = \sum_{s\neq v\neq t\in V} \frac{\sigma_{st}(v)}{\sigma_{st}}$$ where $\sigma_{st}(v)$ is the number of shortest paths from $s$ to $t$ passing through $v$, while $\sigma_{st}$ is the total number of paths of least length connecting $s$ and $t$. High local centrality means that a node is in direct contact with many other nodes, whereas low centrality indicates a peripheral vertex.

Along with these local measures, we compute the closeness centrality and the PageRank ranking.
pr = nx.pagerank_numpy( H, alpha = 0.85 )
cb = nx.centrality.betweenness_centrality( H )
cc = nx.centrality.closeness_centrality( H )
cd = nx.centrality.degree_centrality( H )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_ha_final.ipynb
ivannz/study_notes
mit
The mixing coefficient

The mixing coefficient for a numerical node attribute $X = \big(x_i\big)$ in an undirected graph $G$, with adjacency matrix $A$, is defined as $$\rho(x) = \frac{\text{cov}}{\text{var}} = \frac{\sum_{ij}A_{ij}(x_i-\bar{x})(x_j-\bar{x})}{\sum_{ij}A_{ij}(x_i-\bar{x})^2}$$ where $\bar{x} = \frac{1}{2m}\sum_i \delta_i x_i$ is the mean value of $X$ weighted by vertex degree. Note that $A$ is necessarily symmetric.

This coefficient can be represented in matrix notation as $$\rho(x) = \frac{X'AX - 2m \bar{x}^2}{X'\text{diag}(D)X - 2m \bar{x}^2}$$ where $\text{diag}(D)$ is the diagonal matrix of vertex degrees, and $\bar{x}$ is the degree-weighted mean of the attribute defined above. A quick sanity check of this formula against networkx's built-in degree assortativity follows the implementation below.
def assortativity( G, X ) :
    ## represent the graph in adjacency matrix form
    A = nx.to_numpy_matrix( G, dtype = np.float, nodelist = G.nodes( ) )
    ## convert X, a dictionary, to a numpy vector
    x = np.array( [ X[ n ] for n in G.nodes( ) ], dtype = np.float )
    ## compute the x'Ax part
    xAx = np.dot( x, np.array( A.dot( x ) ).flatten( ) )
    ## and the x'diag(D)x part. Note that left-multiplying a vector
    ## by a diagonal matrix is equivalent to element-wise multiplication.
    D = np.array( A.sum( axis = 1 ), dtype = np.float ).flatten( )
    xDx = np.dot( x, np.multiply( D, x ) )
    ## numpy.average( ) actually normalizes the weights.
    x_bar = np.average( x, weights = D )
    D_sum = np.sum( D, dtype = np.float )
    return ( xAx - D_sum * x_bar * x_bar ) / ( xDx - D_sum * x_bar * x_bar )
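As a quick sanity check (an addition to the original notebook): $\rho$ is invariant under affine rescalings of $X$, so feeding the raw vertex degrees into assortativity() should reproduce networkx's built-in degree assortativity coefficient, which is the Pearson correlation of degrees across edges.

```python
## with x_i equal to the vertex degree, rho(x) is exactly Newman's
## degree assortativity, which networkx computes directly
deg_attr = dict( H.degree( ) )
print "formula : %.6f" % assortativity( H, deg_attr )
print "networkx: %.6f" % nx.degree_assortativity_coefficient( H )
```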
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_ha_final.ipynb
ivannz/study_notes
mit
Let's compute the assortativity for the centralities, pagerank vector, vertex degrees and node attributes.
print "PageRank assortativity coefficient: %.3f" % assortativity( H, nx.pagerank_numpy( H, alpha = 0.85 ) ) print "Betweenness centrality assortativity coefficient: %.3f" % assortativity( H, nx.centrality.betweenness_centrality( H ) ) print "Closenesss centrality assortativity coefficient: %.3f" % assortativity( H, nx.centrality.closeness_centrality( H ) ) print "Degree assortativity coefficient: %.3f" % assortativity( H, nx.centrality.degree_centrality( H ) ) print "Gender assortativity coefficient: %.3f" % nx.assortativity.attribute_assortativity_coefficient( H, 'sex' ) print "Agerank assortativity coefficient: %.3f" % assortativity( H, nx.get_node_attributes( H, 'agerank') ) print "Language assortativity coefficient: %.3f" % nx.assortativity.attribute_assortativity_coefficient( H, 'locale' ) print "Number of posts on the wall assortativity coefficient: %.3f" % nx.assortativity.attribute_assortativity_coefficient( H, 'wallcount' )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_ha_final.ipynb
ivannz/study_notes
mit
This component does not show segregation patterns in connectivity: the computed coefficients indicate neither that "opposites" attract, nor that "kindred spirits" attach. The noticeably high value of the degree centrality assortativity is probably due to the component already having a tight cluster structure.

Node Rankings

It is sometimes interesting to look at a table representation of a symmetric distance matrix. The procedure below prints such a matrix in a more readable format.
## Print the upper triangle of a symmetric matrix in reverse column order
def show_symmetric_matrix( A, labels, diag = False ) :
    d = 0 if diag else 1
    c = len( labels ) - d
    print "\t", "\t".join( c * [ "%.3s" ] ) % tuple( labels[ d: ][ ::-1 ] )
    for i, l in enumerate( labels if diag else labels[ :-1 ] ) :
        ## use the matrix passed in as the argument, rather than a global
        print ( ( "%4s\t" % l ) + "\t".join( ( c - i ) * [ "%.3f" ] ) % tuple( A[ i, i+d: ][ ::-1 ] ) )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_ha_final.ipynb
ivannz/study_notes
mit
It is actually interesting to compare the orderings produced by different vertex-ranking algorithms. The most direct way is to analyse the pairwise Spearman's $\rho$, since it compares the rank transformation of one vector of observed data to another.
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr as rho

labels = [ 'btw', 'deg', 'cls', 'prk' ]
align = lambda dd : np.array( [ dd[ n ] for n in H.nodes( ) ], dtype = np.float )

rank_dist = squareform( pdist( [ align( cb ), align( cd ), align( cc ), align( pr ) ],
    metric = lambda a, b : rho( a, b )[ 0 ] ) )
show_symmetric_matrix( rank_dist, labels )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_ha_final.ipynb
ivannz/study_notes
mit
The rankings match each other very closely!

Community detection

A $k$-clique community detection method considers a set of nodes a community if every node is part of at least one $k$-clique and adjacent $k$-cliques overlap in at least $k-1$ vertices. A toy illustration of this definition is sketched below, before the method is applied to $H$.
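The following toy example is an addition to the notebook: two triangles sharing an edge percolate into a single 3-clique community, while a disconnected triangle forms a community of its own.

```python
## two triangles sharing the edge (2, 3), plus a separate triangle
T = nx.Graph( [ (1, 2), (2, 3), (1, 3), (2, 4), (3, 4), (5, 6), (6, 7), (5, 7) ] )
## expected, up to ordering: [frozenset([1, 2, 3, 4]), frozenset([5, 6, 7])]
print list( nx.community.k_clique_communities( T, 3 ) )
```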
kcq = list( nx.community.k_clique_communities( H, 3 ) )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_ha_final.ipynb
ivannz/study_notes
mit
The label propagation algorithm initially assigns a unique label to each node, and then relabels nodes in random order until stabilization; the new label of a node is the label carried by the largest number of its neighbours. A minimal sketch of the basic asynchronous variant is given below for reference.

The code used here is borrowed from lpa.py by Tyler Rush, which can be found at networkx-devel. The procedure is an implementation of the idea in:
* Cordasco, G., & Gargano, L. (2012). Label propagation algorithm: a semi-synchronous approach. International Journal of Social Network Mining, 1(1), 3-26.
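The lpa module used in the next cell is external to networkx. As a reference point only, here is a minimal sketch of the basic asynchronous label propagation idea (not of the semi-synchronous method of the cited paper); the helper name, the sweep limit, and the tie-breaking are my own choices.

```python
import random
from collections import Counter

def label_propagation( G, max_sweeps = 100 ) :
    ## every node starts with its own unique label
    labels = { n : n for n in G.nodes( ) }
    for _ in range( max_sweeps ) :
        order = list( G.nodes( ) )
        random.shuffle( order )
        changed = False
        for n in order :
            counts = Counter( labels[ v ] for v in G.neighbors( n ) )
            if not counts :
                continue
            ## adopt the most common label among the neighbours
            ## (ties are broken arbitrarily here; real implementations break them at random)
            best = counts.most_common( 1 )[ 0 ][ 0 ]
            if labels[ n ] != best :
                labels[ n ], changed = best, True
        if not changed :
            break
    return labels

## communities = label_propagation( H )
```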
import lpa

lab = lpa.semisynchronous_prec_max( H )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_ha_final.ipynb
ivannz/study_notes
mit
Markov Cluster Algorithm (MCL).

Input: transition matrix $T = D^{-1}A$
Output: adjacency matrix $M^*$

1. Set $M = T$
2. repeat:
3. Expansion step: $M = M^p$ (usually $p=2$)
4. Inflation step: raise every entry of $M$ to the power $\alpha$ (usually $\alpha=2$)
5. Renormalization step: normalize each row by its sum
6. Pruning step: replace entries that are close to $0$ by exact $0$
7. until $M$ converges
8. $M^* = M$
def mcl_iter( A, p = 2, alpha = 2, theta = 1e-8, rel_eps = 1e-4, niter = 10000 ) :
    ## Convert A into a transition kernel: M_{ij} is the probability of making a transition from i to j.
    M = np.multiply( 1.0 / A.sum( axis = 1, dtype = np.float64 ).reshape( -1, 1 ), A )
    i = 0 ; status = -1
    while i < niter :
        M_prime = M.copy( )
        ## Expansion step: M_{ij} becomes the probability of reaching vertex j from i in p hops.
        M = np.linalg.matrix_power( M, p )
        ## Pruning: zero out paths with negligible transition probability.
        M[ np.abs( M ) < theta ] = 0
        ## Inflation step: dampen the small probabilities.
        M = np.power( M, alpha )
        ## Renormalisation step: make the matrix into a stochastic transition kernel.
        N = M.sum( axis = 1, dtype = np.float64 )
        ## If a nan is encountered, then abort.
        if np.any( np.isnan( N ) ) :
            status = -2
            break
        M = np.multiply( 1.0 / N.reshape( -1, 1 ), M )
        ## The convergence criterion is the L1 norm of the relative divergence of transition probabilities.
        if np.sum( np.abs( M - M_prime ) / ( np.abs( M_prime ) + rel_eps ) ) < rel_eps :
            status = 0
            break
        ## Advance to the next iteration.
        i += 1
    return ( M, ( status, i ) )

def extract_communities( M, lengths = True ) :
    ## It is expected that the MCL matrix stores communities in its columns.
    C = list( ) ; i0 = 0
    if np.any( np.isnan( M ) ) :
        return C
    ## Find all indices of nonzero elements.
    r, c = np.where( np.array( M ) )
    ## Sort them by the column index and find the community sizes.
    r = r[ np.argsort( c ) ]
    u = np.unique( c, return_counts = True )
    if np.sum( u[ 1 ] ) > M.shape[ 1 ] :
        return C
    if lengths :
        return u[ 1 ]
    ## Column indices of nonzero entries are ordered, so we just sweep across the sizes.
    for s in u[ 1 ] :
        ## Row indices of the nonzero entries in a column are the indices of the nodes in the community.
        C.append( r[ i0:i0 + s ] )
        i0 += s
    return C

def make_labels( com, mapper = None ) :
    dd = dict( )
    for i, c in enumerate( com, 1 ) :
        for k in c :
            if mapper is not None :
                dd[ mapper[ k ] ] = i
            else :
                dd[ k ] = i
    return dd
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_ha_final.ipynb
ivannz/study_notes
mit
Let's check how the Markov Clustering Algorithm fares against the $k$-clique and label-propagation methods.
fig = plt.figure( figsize = (12, 8) )
axs = fig.add_subplot( 1, 1, 1, axisbg = 'black', title = "Master cluster" )

A = nx.to_numpy_matrix( H, dtype = np.float, nodelist = nx.spectral_ordering( H ) )
C, _ = mcl_iter( A )
mcl = extract_communities( C, lengths = False )

axs.spy( A, color = "gold", markersize = 15, marker = '.' )
axs.spy( C, color = "magenta", markersize = 10, marker = '.' )

for i, c in enumerate( kcq, 1 ):
    row = ", ".join( [ G.node[ n ][ 'label' ] for n in c ] )
    print "%#2d (%d)\t" % ( i, len( c ) ), ( row )[:100].strip() + ( " ..." if len( row ) > 100 else "" )

fig = plt.figure( figsize = (12, 8) )
axs = fig.add_subplot( 1, 1, 1, axisbg = 'black', title = "Master cluster: 5-clique communities" )

kcq = list( nx.community.k_clique_communities( H, 5 ) )
deg = make_labels( kcq )
nx.draw_networkx( H, with_labels = False, ax = axs,
    cmap = plt.cm.Reds, node_color = deg.values( ), edge_color = "cyan",
    nodelist = deg.keys( ), node_size = 200, pos = pos )

for i, c in enumerate( kcq, 1 ):
    row = ", ".join( [ G.node[ n ][ 'label' ] for n in c ] )
    print "%#2d (%d)\t" % ( i, len( c ) ), ( row )[:100].strip() + ( " ..." if len( row ) > 100 else "" )

fig = plt.figure( figsize = (12, 8) )
axs = fig.add_subplot( 1, 1, 1, axisbg = 'black', title = "Master cluster: 7-clique communities" )

kcq = list( nx.community.k_clique_communities( H, 7 ) )
deg = make_labels( kcq )
nx.draw_networkx( H, with_labels = False, ax = axs,
    cmap = plt.cm.Reds, node_color = deg.values( ), edge_color = "cyan",
    nodelist = deg.keys( ), node_size = 200, pos = pos )

for i, c in enumerate( kcq, 1 ):
    row = ", ".join( [ G.node[ n ][ 'label' ] for n in c ] )
    print "%#2d (%d)\t" % ( i, len( c ) ), ( row )[:100].strip() + ( " ..." if len( row ) > 100 else "" )

fig = plt.figure( figsize = (12, 8) )
axs = fig.add_subplot( 1, 1, 1, axisbg = 'black', title = "Master cluster: 4-clique communities" )

kcq = list( nx.community.k_clique_communities( H, 4 ) )
deg = make_labels( kcq )
nx.draw_networkx( H, with_labels = False, ax = axs,
    cmap = plt.cm.Reds, node_color = deg.values( ), edge_color = "cyan",
    nodelist = deg.keys( ), node_size = 200, pos = pos )

for i, c in enumerate( kcq, 1 ):
    row = ", ".join( [ G.node[ n ][ 'label' ] for n in c ] )
    print "%#2d (%d)\t" % ( i, len( c ) ), ( row )[:100].strip() + ( " ..." if len( row ) > 100 else "" )

fig = plt.figure( figsize = (12, 8) )
axs = fig.add_subplot( 1, 1, 1, axisbg = 'black', title = "Master cluster: label propagation" )

deg = make_labels( lab.values( ) )
nx.draw_networkx( H, with_labels = False, ax = axs,
    cmap = plt.cm.Reds, node_color = deg.values( ), edge_color = "cyan",
    nodelist = deg.keys( ), node_size = 200, pos = pos )

for i, c in enumerate( lab.values( ), 1 ):
    row = ", ".join( [ G.node[ n ][ 'label' ] for n in c ] )
    print "%#2d (%d)\t" % ( i, len( c ) ), ( row )[:100].strip() + ( " ..." if len( row ) > 100 else "" )

fig = plt.figure( figsize = (12, 8) )
axs = fig.add_subplot( 1, 1, 1, axisbg = 'black', title = "Master cluster: Markov Clustering" )

mcl = extract_communities( mcl_iter( nx.to_numpy_matrix( H, dtype = np.float ), p = 2, alpha = 2 )[ 0 ], lengths = False )
deg = make_labels( mcl, mapper = H.nodes( ) )
nx.draw_networkx( H, with_labels = False, ax = axs,
    cmap = plt.cm.Reds, node_color = deg.values( ), edge_color = "cyan",
    nodelist = deg.keys( ), node_size = 200, pos = pos )

for i, c in enumerate( mcl, 1 ):
    row = ", ".join( [ G.node[ n ][ 'label' ] for n in c ] )
    print "%#2d (%d)\t" % ( i, len( c ) ), ( row )[:100].strip() + ( " ..." if len( row ) > 100 else "" )
year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_ha_final.ipynb
ivannz/study_notes
mit
Quiz Question: What is the Euclidean distance between the query house and the 10th house of the training set?
print(features_test[0])
print(features_train[9])

import math

def get_distance(vec1, vec2):
    return math.sqrt(np.sum((vec1 - vec2)**2))

get_distance(features_test[0], features_train[9])
ml-regression/week 6/K-NN.ipynb
isendel/machine-learning
apache-2.0
Quiz Question: Among the first 10 training houses, which house is the closest to the query house?
min_distance = None
closest_house = None
for i, train_house in enumerate(features_train[0:10]):
    dist = get_distance(features_test[0], train_house)
    if i == 0 or dist < min_distance:
        min_distance = dist
        closest_house = i
print(min_distance)
print(closest_house)

diff = features_train - features_test[0]
np.sum(diff[-1], axis=0)

dist = np.sqrt(np.sum(diff**2, axis=1))
dist[100]

def compute_distances(features_instances, features_query):
    diff = features_instances - features_query
    distances = np.sqrt(np.sum(diff**2, axis=1))
    return distances
ml-regression/week 6/K-NN.ipynb
isendel/machine-learning
apache-2.0
17. Quiz Question: What is the predicted value of the query house based on 1-nearest neighbor regression?
distances = compute_distances(features_train, features_test[2])
print(distances)
print(np.argmin(distances))

np.where(distances == min(distances))
distances[1149]

def k_nearest_neighbors(k, features_train, features_query):
    distances = compute_distances(features_train, features_query)
    return distances, np.argsort(distances)[:k]

distances, neighbours = k_nearest_neighbors(4, features_train, features_test[2])
for n in neighbours:
    print(distances[n])
print(neighbours)

def predict_output_of_query(k, features_train, output_train, features_query):
    distances, neighbours = k_nearest_neighbors(k, features_train, features_query)
    prediction = output_train[neighbours].mean()
    return prediction

predict_output_of_query(1, features_train, output_train, features_test[2])
predict_output_of_query(4, features_train, output_train, features_test[2])
print(output_test[2])

def predict_output(k, features_train, output_train, features_query):
    predictions = np.zeros((features_query.shape[0], 1))
    for i in range(features_query.shape[0]):
        predictions[i, 0] = predict_output_of_query(k, features_train, output_train, features_query[i])
    return predictions

predictions = predict_output(10, features_train, output_train, features_test[:10])
print(predictions)
print(np.argmin(predictions))
print(output_test[:10])

rsss = []
for k in range(1, 16):
    predictions = predict_output(k, features_train, output_train, features_valid)
    error = predictions - output_valid
    rss = error.T.dot(error)
    print('RSS for k=%s: %s' % (k, rss))
    rsss.append(rss)

predictions = predict_output(3, features_train, output_train, features_test)
error = predictions - output_test
rss = error.T.dot(error)
print(rss)
ml-regression/week 6/K-NN.ipynb
isendel/machine-learning
apache-2.0
Introduction to networkx

Network Basics

Networks, a.k.a. graphs, are an immensely useful tool for modeling complex relational problems. Networks are comprised of two main entities:

* Nodes: commonly represented as circles. In the academic literature, nodes are also known as "vertices".
* Edges: commonly represented as lines between circles.

Another way to think of it is: nodes are things you are interested in, and edges denote the relationships between the things that you are interested in. Thus investigating a graph's edges is the more interesting part of network/graph analysis.

In a network, if two nodes are joined together by an edge, then they are neighbors of one another. There are generally two types of networks: directed and undirected. In undirected networks, edges do not have a directionality associated with them. In directed networks, they do.

Examples:

* Facebook's network: individuals are nodes, edges are drawn between individuals who are FB friends with one another. Undirected network.
* Air traffic network: airports are nodes, flights between airports are the edges. Directed network.

The key questions here are as follows. How do we:

1. Model a problem as a network?
2. Extract useful information from a network?

networkx quickstart

In the networkx implementation, graph objects store their data in dictionaries. Nodes are part of the attribute Graph.node, which is a dictionary where the key is the node ID and the values are a dictionary of attributes. Edges are part of the attribute Graph.edge, which is a nested dictionary. Data are accessed as such: G.edge[node1][node2]['attr_name'] (a tiny self-contained example of this access pattern is sketched below). Because of the dictionary implementation of the graph, any hashable object can be a node. This means strings and tuples, but not lists and sets.

To get started, we'll use a synthetic social network, on which we will attempt to answer the following basic questions using the networkx API:

1. How many people are present in the network?
2. What is the distribution of attributes of the people in this network?
3. How many relationships are represented in the network?
4. What is the distribution of the number of friends that each person has?
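As a tiny self-contained illustration (an addition to this notebook), here is that access pattern on a two-node toy graph; it relies on the networkx 1.x attribute names (Graph.node, Graph.edge) used throughout this notebook.

```python
toy = nx.Graph()
toy.add_node('a', age=20)
toy.add_node('b', age=30)
toy.add_edge('a', 'b', weight=0.5)

print(toy.node['a']['age'])          # node attribute lookup -> 20
print(toy.edge['a']['b']['weight'])  # edge attribute lookup -> 0.5
```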
G = nx.read_gpickle('Synthetic Social Network.pkl')

# .nodes() gives you the nodes (a list) represented in the network;
# here we access the number of nodes
print(len(G.nodes()))

# or equivalently
print(len(G))

# Who is connected to whom in the network?
# The edges are represented as a list of tuples,
# where each tuple contains the nodes that form the edge.
# Print out the first four to conserve space.
G.edges()[:4]
networkx/networkx.ipynb
ethen8181/machine-learning
mit
Concept

A network, more technically known as a graph, is comprised of:

* a set of nodes
* joined by a set of edges

They can be represented as two lists:

1. A node list: a list of 2-tuples where the first element of each tuple is the representation of the node, and the second element is a dictionary of metadata associated with the node.
2. An edge list: a list of 3-tuples where the first two elements are the nodes that are connected together, and the third element is a dictionary of metadata associated with the edge.

Since this is a social network of people, there will be attributes for each individual, such as age and sex. We can grab that data from the attributes stored with each node by adding the data = True argument. Let's get a list of nodes with their attributes.
# networkx will return a list of tuples in the form (node_id, attribute_dictionary)
print(G.nodes(data = True)[:5])

# exercise: count how many males and females are represented in the graph
from collections import Counter

sex = [d['sex'] for _, d in G.nodes(data = True)]
Counter(sex)
networkx/networkx.ipynb
ethen8181/machine-learning
mit
Edges can also store attributes in their attribute dictionary. Here the attribute is a datetime object representing the datetime in which the edges were created.
G.edges(data = True)[:4]

# exercise: figure out the range of dates during which these relationships
# were forged; specifically, compute the earliest and latest date
dates = [d['date'] for _, _, d in G.edges(data = True)]
print(min(dates))
print(max(dates))
networkx/networkx.ipynb
ethen8181/machine-learning
mit
Exercise

We found out that there are two individuals that we left out of the network, individuals no. 31 and 32. They are one male (31) and one female (32), aged 22 and 24 respectively; they got to know each other on 2010-01-09, and both got to know individual 7 on 2009-12-11.

Use the functions G.add_node() and G.add_edge() to add this data into the network.
G.add_node(31, age = 22, sex = 'Male')
G.add_node(32, age = 24, sex = 'Female')
G.add_edge(31, 32, date = datetime(2010, 1, 9))
G.add_edge(31, 7, date = datetime(2009, 12, 11))
G.add_edge(32, 7, date = datetime(2009, 12, 11))

def test_graph_integrity(G):
    """verify that the implementation above is correct"""
    assert 31 in G.nodes()
    assert 32 in G.nodes()
    assert G.has_edge(31, 32)
    assert G.has_edge(31, 7)
    assert G.has_edge(32, 7)
    print('All tests passed.')

test_graph_integrity(G)
networkx/networkx.ipynb
ethen8181/machine-learning
mit
Note that networkx will override the old data if you add duplicates. E.g. if we start out with G.add_node(31, age = 22, sex = 'Male') and then make another call G.add_node(31, age = 25, sex = 'Male'), the age for node 31 will be 25.

Coding Patterns

These are some recommended coding patterns when doing network analysis using networkx.

Iterating using list comprehensions:

```python
[d['attr'] for n, d in G.nodes(data = True)]
```

And if the node is unimportant, we can use _ to indicate that that field will be discarded:

```python
[d['attr'] for _, d in G.nodes(data = True)]
```

A similar pattern can be used for edges:

```python
[(n1, n2) for n1, n2, _ in G.edges(data = True)]
[d for _, _, d in G.edges(data = True)]
```

If the graph we are constructing is a directed graph, with a "source" and "sink" available, then the following pattern is recommended:

```python
[(sc, sk) for sc, sk, d in G.edges(data = True)]
```

Visualizing Networks

We can draw graphs using the nx.draw() function. The most popular format for drawing graphs is the node-link diagram. If the network is small enough to visualize, and the node labels are small enough to fit in a circle, then you can use the with_labels = True argument.
plt.rcParams['figure.figsize'] = 8, 6
nx.draw(G, with_labels = True)
networkx/networkx.ipynb
ethen8181/machine-learning
mit
Another way is to use a matrix to represent them. This is done by using the nx.to_numpy_matrix(G) function. The nodes are on the x- and y-axes, and a filled square represents an edge between the nodes. We then use matplotlib's pcolor(numpy_array) function to plot. Because pcolor cannot take in numpy matrices, we will cast the matrix as an array of arrays and then get pcolor to plot it.
matrix = nx.to_numpy_matrix(G)

plt.pcolor(np.array(matrix))
plt.axes().set_aspect('equal')  # set aspect ratio equal to get a square visualization
plt.xlim(min(G.nodes()), max(G.nodes()))  # set the x and y limits to the range of node IDs present
plt.ylim(min(G.nodes()), max(G.nodes()))
plt.title('Adjacency Matrix')
plt.show()
networkx/networkx.ipynb
ethen8181/machine-learning
mit
Hubs

How do we evaluate the importance of some individuals in a network?

Within a social network, there will be certain individuals who perform important functions. For example, there may be hyper-connected individuals who are connected to many, many more people. They would be of use in the spreading of information. Alternatively, if this were a disease contact network, identifying them would be useful in stopping the spread of diseases. How would one identify these people?

Approach 1: Neighbors

One way we could compute this is to find out the number of people an individual is connected to. networkx lets us do this by giving us a G.neighbors(node) function.
# re-load the pickled data without the new individuals added in the introduction
G = nx.read_gpickle('Synthetic Social Network.pkl')

# the number of neighbors that individual #19 has
len(G.neighbors(19))

# create a ranked list of the importance of each individual,
# based on the number of neighbors they have
node_neighbors = [(n, G.neighbors(n)) for n in G.nodes()]
sorted(node_neighbors, key = lambda x: len(x[1]), reverse = True)[:4]
networkx/networkx.ipynb
ethen8181/machine-learning
mit
Approach 2: Degree Centrality

The number of other nodes that one node is connected to is a measure of its centrality. networkx implements a degree centrality, which is defined as the number of neighbors that a node has, normalized by the number of individuals it could be connected to in the entire graph. This is accessed by using nx.degree_centrality(G), which returns a dictionary (the node is the key, the measure is the value).
print(nx.degree_centrality(G)[19])

# confirm by calculating manually;
# the -1 excludes the node itself, since this network has no self-loops
# (note that in some settings self-loops do make sense, e.g. bike routes)
print(len(G.neighbors(19)) / (len(G.nodes()) - 1))
networkx/networkx.ipynb
ethen8181/machine-learning
mit
Degree centrality and the number of neighbors are strongly related, as they both measure whether a given node is a hub or not. By identifying the hubs (e.g. a LinkedIn influencer, or the source that is spreading a disease) we can take action on them to create value or prevent catastrophes.
# exercise: create a histogram of the distribution of degree centralities
centrality = list(nx.degree_centrality(G).values())
plt.hist(centrality)
plt.title('degree centralities')
plt.show()

# exercise: create a histogram of the distribution of the number of neighbors
neighbor = [len(G.neighbors(n)) for n in G]
plt.hist(neighbor)
plt.title('number of neighbors')
plt.show()

plt.scatter(neighbor, centrality)
plt.show()
networkx/networkx.ipynb
ethen8181/machine-learning
mit
Paths in a Network

Graph traversal is akin to walking along the graph, node by node, restricted by the edges that connect the nodes. Graph traversal is particularly useful for understanding the local structure (e.g. connectivity, retrieving the exact relationships) of certain portions of the graph, and for finding paths that connect two nodes in the network.

Using the synthetic social network, we will figure out how to find the shortest path from individual A to individual B.

One approach is what one would call a breadth-first search. It can be used on both directed and undirected graphs, but the graph's edges have to be unweighted. The approach starts at a source node and explores the immediate neighbor nodes first, before moving on to the next-level neighbors. In greater detail:

1. Begin with a queue containing the starting node.
2. Add the neighbors of that node to the queue.
   1. If the destination node is present in the queue, end.
   2. If the destination node is not present, proceed.
3. For each node in the queue:
   1. Remove the node from the queue.
   2. Add the neighbors of the node to the queue, checking whether the destination node is among them.
   3. If the destination node is present, end.
   4. If the destination node is not present, continue.

Try implementing this algorithm in a function. The function should take in two nodes, node1 and node2, and the graph G that they belong to, and return a Boolean that indicates whether a path exists between those two nodes or not.
from collections import deque

def path_exists(G, source, target):
    """checks whether a path exists between two nodes (source, target) in graph G"""
    if not G.has_node(source):
        raise ValueError('Source node {} not in graph'.format(source))
    if not G.has_node(target):
        raise ValueError('Target node {} not in graph'.format(target))

    path_exist = False
    visited_node = set([source])
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbor in G.neighbors(node):
            if neighbor == target:
                path_exist = True
                break
            if neighbor not in visited_node:
                # mark the neighbor as visited when it is enqueued,
                # so that no node is put on the queue twice
                visited_node.add(neighbor)
                queue.append(neighbor)
        if path_exist:
            break
    return path_exist

# 18 and any other node (should return False)
# 29 and 26 (should return True)
print(path_exists(G = G, source = 18, target = 5))
print(path_exists(G = G, source = 29, target = 26))
networkx/networkx.ipynb
ethen8181/machine-learning
mit
Meanwhile... thankfully, networkx has a function for us to use, titled has_path, so we don't have to implement this on our own. :-)
nx.has_path(G = G, source = 29, target = 26)
networkx/networkx.ipynb
ethen8181/machine-learning
mit
networkx also has other shortest path algorithms implemented. e.g. nx.shortest_path(G, source, target) gives us a list of nodes that exist within one of the shortest paths between the two nodes. We can build upon these to build our own graph query functions. Let's see if we can trace the shortest path from one node to another. Hint: You may want to use G.subgraph(iterable_of_nodes) to extract just the nodes and edges of interest from the graph G
nx.shortest_path(G, 4, 14)

def extract_path_edges(G, source, target):
    new_G = None
    if nx.has_path(G, source, target):
        nodes_of_interest = nx.shortest_path(G, source, target)
        new_G = G.subgraph(nodes_of_interest)
    return new_G

source = 4
target = 14
new_G = extract_path_edges(G, source, target)
nx.draw(new_G, with_labels = True)
networkx/networkx.ipynb
ethen8181/machine-learning
mit
Hubs Revisited

It looks like individual 19 is an important person of some sort: if a message has to be passed through the network in the shortest time possible, then it will usually go through person 19. Such a person has a high betweenness centrality. This is implemented as one of networkx's centrality algorithms. Note that degree centrality and betweenness centrality don't necessarily correlate.

The betweenness centrality of a node $v$ is the sum of the fraction of all-pairs shortest paths that pass through $v$: $$c_B(v) = \sum_{s,t \in V} \frac{\sigma(s, t \vert v)}{\sigma(s, t)}$$

Where:
* $V$ denotes the set of nodes
* $\sigma(s, t \vert v)$ denotes the number of shortest paths between $s$ and $t$ that contain vertex $v$
* $\sigma(s, t)$ denotes the number of shortest paths between $s$ and $t$

A brute-force check of this definition on a tiny graph is sketched below.
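To connect the formula to the library call in the next cell (an addition to this notebook), here is a brute-force evaluation of $c_B(v)$ on a 4-node path graph, checked against networkx; the brute_betweenness helper is hypothetical. Note that for undirected graphs networkx sums over unordered node pairs, which this sketch mirrors.

```python
from itertools import combinations

def brute_betweenness(G, v):
    # direct use of the definition: sum over node pairs of the fraction
    # of shortest paths between them that pass through v
    total = 0.0
    for s, t in combinations(G.nodes(), 2):
        if v in (s, t):
            continue
        paths = list(nx.all_shortest_paths(G, s, t))
        total += sum(v in p for p in paths) / float(len(paths))
    return total

P = nx.path_graph(4)  # 0 - 1 - 2 - 3
print(brute_betweenness(P, 1))                            # 2.0
print(nx.betweenness_centrality(P, normalized=False)[1])  # 2.0
```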
nx.betweenness_centrality(G, normalized = False)[19]
networkx/networkx.ipynb
ethen8181/machine-learning
mit
The set of relationships involving A, B and C, if closed, forms a triangle in the graph; the set of relationships that also includes D forms a square.

You may have observed that social networks (LinkedIn, Facebook, Twitter, etc.) have friend recommendation systems. How exactly do they work? Apart from analyzing other variables, closing triangles is one of the core ideas behind these systems: if A knows B and B knows C, then A probably knows C as well.

Cliques

In a social network, cliques are groups of people in which everybody knows everybody. Triangles are a simple example of cliques. Let's try implementing a simple algorithm that finds out whether a node is present in a triangle or not. The core idea is that if a node is present in a triangle, then its neighbors' neighbors' neighbors should include itself.
# reload the network
G = nx.read_gpickle('Synthetic Social Network.pkl')
nx.draw(G, with_labels = True)
networkx/networkx.ipynb
ethen8181/machine-learning
mit
Exercise

Write a function that takes in one node and its associated graph as an input, and returns a list or set of itself plus all other nodes that it is in a triangle relationship with.

Hint: if the neighbor of my neighbor is also my neighbor, then the three of us are in a triangle relationship.
def get_triangles(G, node):
    # store all the nodes that are in a triangle with the target node;
    # include the target node itself to draw the sub-graph later
    triangles = set([node])
    neighbors1 = set(G.neighbors(node))
    for n in neighbors1:
        # if the target node is in a triangle relationship, then
        # its neighbors' neighbors should intersect with its own neighbors
        neighbors2 = set(G.neighbors(n))
        triangle = neighbors1.intersection(neighbors2)
        # if the intersection is non-empty, add the first neighbor and
        # the intersecting set (the second neighbors)
        if triangle:
            triangles.update(triangle)
            triangles.add(n)
    return triangles

print(get_triangles(G = G, node = 3))

# draw the subgraph composed of those nodes to verify
nx.draw(G.subgraph(get_triangles(G = G, node = 3)), with_labels = True)
networkx/networkx.ipynb
ethen8181/machine-learning
mit
Friend Recommendation: Open Triangles

Let's see if we can do some friend recommendation by looking for open triangles. Open triangles are like those we described earlier on: A knows B and B knows C, but C's relationship with A isn't captured in the graph.
def get_open_triangles(G, node):
    # the target node's neighbors' neighbors' neighbors should
    # not include the target node
    open_triangles = []
    neighbors1 = set(G.neighbors(node))
    for node1 in neighbors1:
        # remove the target node from its neighbor's neighbors,
        # since the path would otherwise trivially lead back to itself
        neighbors2 = set(G.neighbors(node1))
        neighbors2.discard(node)
        for node2 in neighbors2:
            neighbors3 = set(G.neighbors(node2))
            if node not in neighbors3:
                open_triangle = set([node])
                open_triangle.update([node1, node2])
                open_triangles.append(open_triangle)
    return open_triangles

open_triangles = get_open_triangles(G = G, node = 3)
open_triangles

# draw out each of the triplets
nodes = get_open_triangles(G = G, node = 20)
for i, triplet in enumerate(nodes):
    fig = plt.figure(i)
    nx.draw(G.subgraph(triplet), with_labels = True)
networkx/networkx.ipynb
ethen8181/machine-learning
mit
Tables to Networks, Networks to Tables

Networks can be represented in tabular form in two ways: as an adjacency list with edge attributes stored as columnar values, and as a node list with node attributes stored as columnar values.

Storing the network data as a single massive adjacency table, with node attributes repeated on each row, can get unwieldy, especially if the graph is large or grows to be so. One way to get around this is to store two files: one with node data and node attributes, and one with edge data and edge attributes.

The Divvy bike sharing dataset is one such example of a network data set that has been stored this way. The data set is comprised of the following data:

* Stations and metadata (like a node list with attributes saved)
* Trips and metadata (like an edge list with attributes saved)

Download the file from dropbox. The README.txt file in the Divvy directory should help orient you around the data.
stations = pd.read_csv(
    'divvy_2013/Divvy_Stations_2013.csv',
    parse_dates = ['online date'],
    index_col = 'id',
    encoding = 'utf-8'
)
# the id represents the node
stations.head()

trips = pd.read_csv(
    'divvy_2013/Divvy_Trips_2013.csv',
    parse_dates = ['starttime', 'stoptime'],
    index_col = ['trip_id']
)
# from_station_id and to_station_id represent
# the two nodes that each edge connects
trips.head()
networkx/networkx.ipynb
ethen8181/machine-learning
mit
At this point, we have our stations and trips data loaded into memory.

How we construct the graph depends on the kind of questions we want to answer, which makes the definition of the "unit of consideration" (the entities whose relationships we are trying to model) extremely important. Let's try to answer the question: "What are the most popular trip paths?" In this case, the bike station is a reasonable "unit of consideration", so we will use the bike stations as the nodes.

To start, we'll initialize a directed graph G and add in the nodes and edges.
# use the pandas DataFrame row-by-row iterator, which
# yields the index together with the row of columns
G = nx.DiGraph()
for n, d in stations.iterrows():
    G.add_node(n, attr_dict = d.to_dict())

# use groupby to retrieve each pair of nodes and the trip count
for (start, stop), d in trips.groupby(['from_station_id', 'to_station_id']):
    G.add_edge(start, stop, count = len(d))

# notice that there are self-loops
print(G.edges(data = True)[:4])
print()

# examine the density (the proportion of possible edges that are present)
num_edges = len(G.edges())
num_possible_edges = len(G.nodes()) ** 2
density = num_edges / num_possible_edges
print('density:', density)
networkx/networkx.ipynb
ethen8181/machine-learning
mit
19.4. Name

Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1

Descriptive text for the lateral tracer advection scheme in the ocean (e.g. MUSCL, PPM-H5, PRATHER, ...)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("Piecewise Parabolic method")
notebooks/noaa-gfdl/cmip6/models/gfdl-esm2m/ocean.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
23.4. Coeff Background

Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1

Describe the background eddy viscosity coefficient in the lateral physics momentum scheme (give values in m2/s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("Western boundary enhanced background plus weak laplacian")
notebooks/noaa-gfdl/cmip6/models/gfdl-esm2m/ocean.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0