Validation Accuracy: 95.92%
Congratulations
You've built a feedforward neural network in Keras!
Don't stop here! Next, you'll add a convolutional layer to drive.py.
Convolutions
Build a new network, similar to your existing network. Before the hidden layer, add a 3x3 convolutional layer with 32 filters and valid paddin... | from keras.layers import Convolution2D, MaxPooling2D, Dropout, Flatten
# TODO: Re-construct the network and add a convolutional layer before the first fully-connected layer.
model = Sequential()
model.add(Convolution2D(16, 5, 5, border_mode='same', input_shape=(32, 32, 3)))
model.add(Activation('relu'))
model.add(Flat... | Exercises/Term1/keras-lab/traffic-sign-classification-with-keras.ipynb | thomasantony/CarND-Projects | mit |
Validation Accuracy: 96.98%
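The exercise text asks for a 3x3 convolution with valid padding, while the snippet above uses a 5x5 kernel with 'same' padding; the difference shows up in the output spatial size. A small sketch of the standard size arithmetic (plain Python; the helper names are my own):

```python
def conv_out(size, kernel, stride=1, padding="valid"):
    # 'valid': no padding; 'same': output size equals ceil(size / stride)
    if padding == "same":
        return -(-size // stride)  # ceiling division
    return (size - kernel) // stride + 1

def pool_out(size, pool, stride=None):
    # Pooling windows default to non-overlapping (stride == pool size)
    stride = stride or pool
    return (size - pool) // stride + 1

# A 32x32 input with a 5x5 'same' convolution keeps its 32x32 size
assert conv_out(32, 5, padding="same") == 32
# A 32x32 input with a 3x3 'valid' convolution shrinks to 30x30
assert conv_out(32, 3, padding="valid") == 30
# 2x2 max pooling halves each spatial dimension
assert pool_out(32, 2) == 16
```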
Pooling
Re-construct your network and add a 2x2 pooling layer immediately following your convolutional layer.
Then compile and train the network. | # TODO: Re-construct the network and add a pooling layer after the convolutional layer.
model = Sequential()
model.add(Convolution2D(16, 5, 5, border_mode='same', input_shape=(32, 32, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(128, input_shape=(flat_... | Exercises/Term1/keras-lab/traffic-sign-classification-with-keras.ipynb | thomasantony/CarND-Projects | mit |
Validation Accuracy: 97.36%
Dropout
Re-construct your network and add dropout after the pooling layer. Set the dropout rate to 50%. | # TODO: Re-construct the network and add dropout after the pooling layer.
model = Sequential()
model.add(Convolution2D(16, 5, 5, border_mode='same', input_shape=(32, 32, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(128, input_sh... | Exercises/Term1/keras-lab/traffic-sign-classification-with-keras.ipynb | thomasantony/CarND-Projects | mit |
Validation Accuracy: 97.75%
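Dropout with a 50% rate zeroes roughly half of the activations during training. As a rough illustration of the idea, here is an "inverted dropout" sketch in NumPy (illustrative only, not Keras' internal implementation):

```python
import numpy as np

def dropout(x, rate, rng):
    # Inverted dropout: zero a fraction `rate` of units and scale the
    # survivors by 1/(1-rate) so the expected activation stays the same.
    keep = rng.random(x.shape) >= rate
    return np.where(keep, x / (1.0 - rate), 0.0)

rng = np.random.default_rng(0)
y = dropout(np.ones((4, 8)), 0.5, rng)
assert y.shape == (4, 8)
assert set(np.unique(y)) <= {0.0, 2.0}  # units are either dropped or rescaled
```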
Optimization
Congratulations! You've built a neural network with convolutions, pooling, dropout, and fully-connected layers, all in just a few lines of code.
Have fun with the model and see how well you can do! Add more layers, or regularization, or different padding, or batches, or more tra... | pool_size = (2,2)
model = Sequential()
model.add(Convolution2D(16, 5, 5, border_mode='same', input_shape=(32, 32, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=pool_size))
model.add(Convolution2D(64, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=pool_size))
model.add(Dropou... | Exercises/Term1/keras-lab/traffic-sign-classification-with-keras.ipynb | thomasantony/CarND-Projects | mit |
Best Validation Accuracy: 99.65%
Testing
Once you've picked out your best model, it's time to test it.
Load up the test data and use the evaluate() method to see how well it does.
Hint 1: After you load your test data, don't forget to normalize the input and one-hot encode the output, so it matches the training data.
H... | # with open('./test.p', mode='rb') as f:
# test = pickle.load(f)
# X_test = test['features']
# y_test = test['labels']
# X_test = X_test.astype('float32')
# X_test /= 255
# X_test -= 0.5
# Y_test = np_utils.to_categorical(y_test, 43)
model.evaluate(X_test, Y_test)
model.save('test-acc-9716-epoch50.h5')
from... | Exercises/Term1/keras-lab/traffic-sign-classification-with-keras.ipynb | thomasantony/CarND-Projects | mit |
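The normalization and one-hot encoding mentioned in Hint 1 can be sketched in NumPy. This mirrors the commented-out cell above (scale pixels to [-0.5, 0.5], one-hot encode 43 classes); the helper name is my own:

```python
import numpy as np

def preprocess(X, y, n_classes=43):
    # Scale pixels from [0, 255] to [-0.5, 0.5], as in the commented-out cell
    X = X.astype('float32') / 255.0 - 0.5
    # One-hot encode integer labels (equivalent to np_utils.to_categorical)
    Y = np.eye(n_classes, dtype='float32')[y]
    return X, Y

X = np.array([[0, 255]], dtype='uint8')
y = np.array([2])
Xn, Y = preprocess(X, y)
assert Xn.min() == -0.5 and Xn.max() == 0.5
assert Y.shape == (1, 43) and Y[0, 2] == 1.0 and Y.sum() == 1.0
```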
Python's standard function dir(obj) returns all member names of an object. Let's see what is in the FORTH kernel vm: | print(dir(vm)) | notebooks/tutor.ipynb | hcchengithub/project-k | mit |
I only want you to see that there are very few properties and methods in this FORTH kernel object and many of them are conventional FORTH tokens like code, endcode, comma, compiling, dictionary, here, last, stack, pop, push, tos, rpop, rstack, rtos, tib, ntib, tick, and words.
Now let's play
The property vm.stack is t... | vm.stack | notebooks/tutor.ipynb | hcchengithub/project-k | mit |
vm.dictate() method is the way project-k VM receives your commands (a string). It actually is also the way we feed it an entire FORTH source code file. Everything given to vm.dictate() is like a command line you type to the FORTH system as simple as only a number: | vm.dictate("123")
vm.stack | notebooks/tutor.ipynb | hcchengithub/project-k | mit |
The first line above dictates project-k VM to push 123 onto the data stack and the second line views the data stack. We can even cascade these two lines into one: | vm.dictate("456").stack | notebooks/tutor.ipynb | hcchengithub/project-k | mit |
because vm.dictate() returns the vm object itself.
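This chaining pattern can be sketched in a few lines of plain Python. MiniVM below is made up for illustration and is not project-k's implementation; it only shows why returning self from dictate() makes vm.dictate("...").stack work:

```python
class MiniVM:
    # Toy stack machine illustrating the chaining pattern, not project-k's API.
    def __init__(self):
        self.stack = []

    def push(self, v):
        self.stack.append(v)

    def pop(self):
        return self.stack.pop()

    def dictate(self, line):
        # Treat every whitespace-separated token as a number to push
        for token in line.split():
            self.push(int(token))
        return self  # returning self enables vm.dictate("...").stack

vm = MiniVM()
assert vm.dictate("123").dictate("456").stack == [123, 456]
assert vm.pop() == 456
```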
project-k VM knows only two words 'code' and 'end-code' at first
Let's define a FORTH command (or 'word') that prints "Hello World!!": | vm.dictate("code hi! print('Hello World!!') end-code"); # define the "hi!" comamnd where print() is a standard python function
vm.dictate("hi!"); | notebooks/tutor.ipynb | hcchengithub/project-k | mit |
Do you see what we have done? We defined a new FORTH code word! By the way, a word name may contain any character except white space; this is a FORTH convention.
Define the 'words' command to view all words
I'd like to see what are all the words we have so far. The FORTH command 'words' is what we want now but th... | vm.dictate("code words print([w.name for w in vm.words['forth'][1:]]) end-code")
vm.dictate("words"); | notebooks/tutor.ipynb | hcchengithub/project-k | mit |
In the above definition, vm.words is a Python dictionary (not a FORTH dictionary) defined as a property of the project-k VM object; it is something like an array of all recent words in the current vocabulary named forth, which is the only vocabulary that comes with the FORTH kernel. A FORTH 'vocabulary' is simply... | vm.dictate("code + push(pop(1)+pop()) end-code"); # pop two operands from FORTH data stack and push back the result
vm.dictate("code .s print(stack) end-code"); # print the FORTH data stack
vm.dictate('code s" push(nexttoken(\'"\'));nexttoken() end-code'); # get a string
vm.dictate('words'); # list all recent wo... | notebooks/tutor.ipynb | hcchengithub/project-k | mit |
This example demonstrates how to use the built-in methods push(), pop(), nexttoken() and the stack property (or global variable). As shown in the definitions above, we can omit vm., so vm.push and vm.stack are simplified to push and stack, because code ... end-code definitions run right in the VM namespace. Now let's try these new wo... | vm.stack = [] # clear the data stack
vm.dictate(' s" Forth "') # get the string 'Forth '
vm.dictate(' s" is the easiest "') # get the string 'is the easiest '
vm.dictate(' s" programming language."') # get the string 'programming language.'
vm.dictate('.s'); # view the data stack
print(vm.dictate('+').stack) # conca... | notebooks/tutor.ipynb | hcchengithub/project-k | mit |
The + command can certainly concatenate strings and can also add numbers, because Python's + operator works that way. Please try it with integers and floating-point numbers: | print(vm.dictate('123 456 + ').pop()); # Push 123, push 456, add them
print(vm.dictate('1.23 45.6 + ').pop()); | notebooks/tutor.ipynb | hcchengithub/project-k | mit |
Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittest... | import tensorflow as tf
def neural_net_image_input(image_shape):
"""
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
"""
# TODO: Implement Function
shape = (None, image_shape[0], image_shape[1], image_shape[2])
return tf.pl... | image-classification/dlnd_image_classification.ipynb | javoweb/deep-learning | mit |
Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor... | def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
"""
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple fo... | image-classification/dlnd_image_classification.ipynb | javoweb/deep-learning | mit |
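To make the convolution-then-max-pooling step concrete, here is a naive single-channel NumPy sketch (stride 1, 'valid' padding). This is only to illustrate the shape arithmetic; the exercise itself expects TensorFlow ops:

```python
import numpy as np

def conv2d_valid(x, w):
    # Naive single-channel 2-D convolution (cross-correlation), stride 1,
    # 'valid' padding: slide the kernel over every fully-contained window.
    kh, kw = w.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def maxpool2d(x, k=2):
    # Non-overlapping k x k max pooling via a reshape trick
    oh, ow = x.shape[0] // k, x.shape[1] // k
    return x[:oh * k, :ow * k].reshape(oh, k, ow, k).max(axis=(1, 3))

x = np.arange(16.0).reshape(4, 4)
y = conv2d_valid(x, np.ones((3, 3)))
assert y.shape == (2, 2)           # (4 - 3 + 1) in each dimension
assert maxpool2d(x).shape == (2, 2)
assert maxpool2d(x)[0, 0] == 5.0   # max of the top-left 2x2 block
```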
Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a ch... | def flatten(x_tensor):
"""
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
"""
# TODO: Implement Function
f_size = x_tensor.get_shape()[1]... | image-classification/dlnd_image_classification.ipynb | javoweb/deep-learning | mit |
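The flatten operation is just a reshape that keeps the batch axis and collapses everything else into one dimension; in NumPy terms:

```python
import numpy as np

def flatten(x):
    # Keep the batch axis, collapse all remaining dimensions into one
    return x.reshape(x.shape[0], -1)

x = np.zeros((10, 30, 6, 2))
assert flatten(x).shape == (10, 360)  # 30 * 6 * 2 == 360
```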
Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packag... | def fully_conn(x_tensor, num_outputs):
"""
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_out... | image-classification/dlnd_image_classification.ipynb | javoweb/deep-learning | mit |
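A fully connected layer is an affine transform followed by a nonlinearity; a NumPy sketch of the idea (the ReLU and the small random initialization are illustrative choices, not requirements of the exercise):

```python
import numpy as np

def fully_conn(x, W, b):
    # Affine transform plus ReLU: (B, n_in) @ (n_in, n_out) + (n_out,)
    return np.maximum(x @ W + b, 0.0)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 360))
W = rng.standard_normal((360, 40)) * 0.01  # small random weights (hypothetical init)
b = np.zeros(40)
out = fully_conn(x, W, b)
assert out.shape == (4, 40)
assert (out >= 0).all()  # ReLU output is non-negative
```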
Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Act... | def output(x_tensor, num_outputs):
"""
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
"""... | image-classification/dlnd_image_classification.ipynb | javoweb/deep-learning | mit |
1.4 Visualize Features | data.describe()
data.describe(include = ['object'])
ind = data['rating_diff'].argmax()
print data.iloc[ind].movie_title
print data.iloc[ind].scaled_imdb
print data.iloc[ind].scaled_douban
print data.iloc[ind].title_year
print data.iloc[ind].movie_imdb_link
print data.iloc[ind].d_year
print data.iloc[ind].douban_score... | Movie_Rating/.ipynb_checkpoints/Culture_difference_movie_rating-checkpoint.ipynb | sadahanu/DataScience_SideProject | mit |
Now analyze the model performance: | store.display_tfma_analysis(<insert model ID here>, slicing_column='trip_start_hour') | tfx/examples/airflow_workshop/notebooks/step6.ipynb | tensorflow/tfx | apache-2.0 |
Now plot the artifact lineage: | # Try different IDs here. Click stop in the plot when changing IDs.
%matplotlib notebook
store.plot_artifact_lineage(<insert model ID here>) | tfx/examples/airflow_workshop/notebooks/step6.ipynb | tensorflow/tfx | apache-2.0 |
Create a sample 2D Image
Gaussians are placed on a grid with some random small offsets
the variable coords are the known positions
these will not be known in a real experiment | # Create coordinates with a random offset
coords = peakFind.lattice2D_2((1, 0), (0, 1), 2, 2, (0, 0), (5, 5))
coords += np.random.rand(coords.shape[0], coords.shape[1]) / 2.5
coords = np.array(coords)*30 + (100, 100)
print('Coords shape = {}'.format(coords.shape))
# Create an image with the coordinates as gaussians
ke... | ncempy/notebooks/example_peakFind.ipynb | ercius/openNCEM | gpl-3.0 |
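The construction described above, summing one Gaussian per slightly offset coordinate, can be sketched as follows (smaller grid and made-up coordinates for brevity):

```python
import numpy as np

def gauss2d(XX, YY, x0, y0, sigma=1.0):
    # Unnormalized 2-D Gaussian evaluated on coordinate grids
    return np.exp(-((XX - x0) ** 2 + (YY - y0) ** 2) / (2 * sigma ** 2))

# Place one Gaussian per coordinate and sum the contributions
YY, XX = np.meshgrid(np.arange(64), np.arange(64), indexing='ij')
coords = np.array([[20.3, 15.7], [40.1, 45.9]])  # (row, col) with sub-pixel offsets
im = np.zeros_like(XX, dtype=float)
for r, c in coords:
    im += gauss2d(XX, YY, c, r, sigma=2.0)

# The brightest pixel sits at the rounded position of one of the peaks
peak = np.unravel_index(np.argmax(im), im.shape)
assert peak in {(20, 16), (40, 46)}
```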
Find the center pixel of each peak
uses ncempy.algo.peakFind.peakFind2D()
These will be integer values of the max peak positions.
Gaussian fitting will be used to find the small random offsets
See end of notebook for an explanation as to how this works. | coords_found = peakFind.peakFind2D(simIm, 0.5)
fg, ax = plt.subplots(1,1)
ax.imshow(simIm)
_ = ax.scatter(coords_found[:,1],coords_found[:,0],c='r',marker='x') | ncempy/notebooks/example_peakFind.ipynb | ercius/openNCEM | gpl-3.0 |
Use Gaussian fitting for sub-pixel fitting
Each peak is fit to a 2D Gaussian function
The average of the sigma values is printed | optPeaks, optI, fittingValues = peakFind.fit_peaks_gauss2D(simIm, coords_found, 5,
(1.5, 2.5), ((-1.5, -1.5,0,0),(1.5,1.5,3,3)))
# Plot the gaussian widths
f2, ax2 = plt.subplots(1, 2)
ax2[0].plot(optPeaks[:, 2],'go')
ax2[0].plot(optPeaks[:, 3],'ro')
ax2[0].s... | ncempy/notebooks/example_peakFind.ipynb | ercius/openNCEM | gpl-3.0 |
Plot to compare the known and fitted coordinates
coords are the expected positions we used to generate the image
coords_found are the peaks found with full pixel precision
optPeaks are the optimized peak positions using Gaussian fitting
Zoom in to peaks to see how well the fit worked | fg, ax = plt.subplots(1,1)
ax.imshow(simIm)
ax.scatter(coords_found[:,1], coords_found[:,0],c='b',marker='o')
ax.scatter(optPeaks[:,1], optPeaks[:,0],c='r',marker='x')
ax.scatter(coords[:,1], coords[:,0],c='k',marker='+')
_ = ax.legend(['integer', 'optimized', 'expected']) | ncempy/notebooks/example_peakFind.ipynb | ercius/openNCEM | gpl-3.0 |
Find the error in the fitting
Gaussian fitting can be heavily influenced by the tails
Some error is expected. | # Plot the RMS error for each fitted peak
# First sort each set of coordinates to match them
err = []
for a, b in zip(coords[np.argsort(coords[:,0]),:], optPeaks[np.argsort(optPeaks[:,0]),0:2]):
err.append(np.sqrt(np.sum(a - b)**2))
fg, ax = plt.subplots(1, 1)
ax.plot(err)
_ = ax.set(xlabel='coordinate', ylabe... | ncempy/notebooks/example_peakFind.ipynb | ercius/openNCEM | gpl-3.0 |
How does peakFind2D work with the Roll?
A very confusing point is the indexing used in meshgrid
If you use indexing='ij' then the peak position needs to be plotted in matplotlib backwards (row,col)
If you change the meshgrid indexing='xy' then this issue is less confusing BUT....
Default indexing used to be 'ij' when ... | # Copy doubleRoll from ncempy.algo.peakFind
# to look at the algorithm
def doubleRoll(image,vec):
return np.roll(np.roll(image, vec[0], axis=0), vec[1], axis=1) | ncempy/notebooks/example_peakFind.ipynb | ercius/openNCEM | gpl-3.0 |
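A quick demonstration of what doubleRoll does: it shifts the whole image by a (row, column) vector with wrap-around, so comparing an image against its rolled copy compares each pixel with a fixed neighbour:

```python
import numpy as np

im = np.zeros((5, 5))
im[2, 3] = 1.0
# Same operation as doubleRoll(im, [1, 1]): shift one row down, one column right
shifted = np.roll(np.roll(im, 1, axis=0), 1, axis=1)
assert shifted[3, 4] == 1.0   # the peak moved one step down and one step right
assert im[2, 3] > shifted[2, 3]  # the original peak beats its shifted neighbour
```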
Create a single 2D Gaussian peak | known_peak = [6, 5]
YY, XX = np.meshgrid(range(0,12),range(0,12),indexing='ij')
gg = gaussND.gauss2D(XX,YY,known_peak[1], known_peak[0],1,1)
gg = np.round(gg,decimals=3)
plt.figure()
plt.imshow(gg) | ncempy/notebooks/example_peakFind.ipynb | ercius/openNCEM | gpl-3.0 |
Roll the array 1 pixel in each direction
Compare the original and the rolled version
The peak will be moved by 1 pixel in each direction in each case
Here I ignore the next nearest neighbors (-1,-1) for simplicity. (peakFind.doubleRoll2D does not ignore these).
The peak will always be larger than the element-by-elemen... | # Compare only nearest neighbors
roll01 = gg > doubleRoll(gg, [0, 1])
roll10 = gg > doubleRoll(gg, [1, 0])
roll11 = gg > doubleRoll(gg, [1, 1])
roll_1_1 = gg > doubleRoll(gg, [-1, -1])
fg,ax = plt.subplots(2,2)
ax[0,0].imshow(roll01)
ax[0,1].imshow(roll10)
ax[1,0].imshow(roll11)
ax[1,1].imshow(roll_1_1)
for aa in ax.... | ncempy/notebooks/example_peakFind.ipynb | ercius/openNCEM | gpl-3.0 |
Compare each rolled image
use logical and to find the pixel which was highest in every comparison
The local peak will be the only one left | final = roll01 & roll10 & roll11 & roll_1_1
fg,ax = plt.subplots(1,1)
ax.imshow(final)
ax.scatter(known_peak[1],known_peak[0])
| ncempy/notebooks/example_peakFind.ipynb | ercius/openNCEM | gpl-3.0 |
Find the peak using where
We have a bool array above.
np.where will return the indices of the True values, which correspond to the peak position(s) | peak_position = np.array(np.where(final))
print(peak_position) | ncempy/notebooks/example_peakFind.ipynb | ercius/openNCEM | gpl-3.0 |
Mouse B cell
We load the hic_data object from the BAM file | reso = 100000
cel1 = 'mouse_B'
cel2 = 'mouse_PSC'
rep1 = 'rep1'
rep2 = 'rep2'
hic_data1 = load_hic_data_from_bam(base_path.format(cel1, rep1),
resolution=reso,
biases=bias_path.format(cel1, rep1, reso // 1000),
ncpu... | doc/source/nbpictures/tutorial_9-Compare_and_merge_Hi-C_experiments.ipynb | 3DGenomes/tadbit | gpl-3.0 |
We compare the interactions of the two Hi-C matrices at a given distance.
The Spearman rank correlation of the matrix diagonals
In the plot we represent the Spearman rank correlation of the diagonals of the matrices starting from the main diagonal until the diagonal at 10Mbp. | ## this part is to "tune" the plot ##
plt.figure(figsize=(9, 6))
axe = plt.subplot()
axe.grid()
axe.set_xticks(range(0, 55, 5))
axe.set_xticklabels(['%d Mb' % int(i * 0.2) if i else '' for i in range(0, 55, 5)], rotation=-45)
#####################################
spearmans, dists, scc, std = correlate_matrices(hic_dat... | doc/source/nbpictures/tutorial_9-Compare_and_merge_Hi-C_experiments.ipynb | 3DGenomes/tadbit | gpl-3.0 |
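The idea behind correlate_matrices can be sketched directly: for each genomic distance d, take the d-th diagonals of the two matrices and rank-correlate them. This is a pure-NumPy toy example on random symmetric matrices, not TADbit's implementation:

```python
import numpy as np

def spearman(a, b):
    # Spearman rank correlation = Pearson correlation of the ranks
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return np.corrcoef(ra, rb)[0, 1]

rng = np.random.default_rng(0)
m1 = rng.random((50, 50)); m1 = m1 + m1.T   # symmetric toy "contact map"
m2 = m1 + 0.1 * rng.random((50, 50))        # a slightly noisy copy of the first
corr_per_dist = [spearman(np.diagonal(m1, d), np.diagonal(m2, d))
                 for d in range(1, 10)]
# Near-identical matrices stay highly rank-correlated at every distance
assert all(c > 0.9 for c in corr_per_dist)
```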
The SCC score as in HiCrep (see https://doi.org/10.1101/gr.220640.117) is also computed. The value of SCC ranges from −1 to 1 and can be interpreted in a way similar to the standard correlation | print('SCC score: %.4f (+- %.7f)' % (scc, std))
reso = 1000000
hic_data1 = hic_data2 = None
hic_data1 = load_hic_data_from_bam(base_path.format(cel1, rep1),
resolution=reso,
biases=bias_path.format(cel1, rep1, reso // 1000),
... | doc/source/nbpictures/tutorial_9-Compare_and_merge_Hi-C_experiments.ipynb | 3DGenomes/tadbit | gpl-3.0 |
The correlation of the eigenvectors
Since the eigenvectors of a matrix capture its internal correlations [26], two matrices whose eigenvectors are highly correlated are considered to have similar structure.
In this case we limit the computation to the first 6 eigenvectors | corrs = eig_correlate_matrices(hic_data1, hic_data2, show=True, aspect='auto', normalized=True)
for cor in corrs:
print(' '.join(['%5.3f' % (c) for c in cor]) + '\n') | doc/source/nbpictures/tutorial_9-Compare_and_merge_Hi-C_experiments.ipynb | 3DGenomes/tadbit | gpl-3.0 |
The reproducibility score (Q)
Computed as in HiC-spector (https://doi.org/10.1093/bioinformatics/btx152), it is also based on comparing eigenvectors. The reproducibility score ranges from 0 (low similarity) to 1 (identity). | reprod = get_reproducibility(hic_data1, hic_data2, num_evec=20, normalized=True, verbose=False)
print('Reproducibility score: %.4f' % (reprod)) | doc/source/nbpictures/tutorial_9-Compare_and_merge_Hi-C_experiments.ipynb | 3DGenomes/tadbit | gpl-3.0 |
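Both this score and the eigenvector comparison above rest on the observation that similar matrices have similar leading eigenvectors. A toy NumPy illustration, with random symmetric matrices standing in for contact maps (not the HiC-spector algorithm itself):

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.random((40, 40)); a = a + a.T        # symmetric toy "contact map"
b = a + 0.05 * rng.random((40, 40))
b = (b + b.T) / 2                            # keep the perturbed copy symmetric

# eigh returns eigenvalues in ascending order, so the last column of the
# eigenvector matrix is the leading eigenvector
va = np.linalg.eigh(a)[1][:, -1]
vb = np.linalg.eigh(b)[1][:, -1]

# Eigenvectors are defined only up to sign, so compare absolute correlation
corr = abs(np.corrcoef(va, vb)[0, 1])
assert corr > 0.9  # a small perturbation barely moves the leading eigenvector
```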
Mouse iPS cell
We load the hic_data object from the BAM file | reso = 100000
hic_data1 = hic_data2 = None
hic_data1 = load_hic_data_from_bam(base_path.format(cel2, rep1),
resolution=reso,
biases=bias_path.format(cel2, rep1, reso // 1000),
ncpus=8)
hic_data2 = load_hic_data_from_b... | doc/source/nbpictures/tutorial_9-Compare_and_merge_Hi-C_experiments.ipynb | 3DGenomes/tadbit | gpl-3.0 |
We compare the interactions of the two Hi-C matrices at a given distance.
The Spearman rank correlation of the matrix diagonals
In the plot we represent the Spearman rank correlation of the diagonals of the matrices starting from the main diagonal until the diagonal at 10Mbp. | ## this part is to "tune" the plot ##
plt.figure(figsize=(9, 6))
axe = plt.subplot()
axe.grid()
axe.set_xticks(range(0, 55, 5))
axe.set_xticklabels(['%d Mb' % int(i * 0.2) if i else '' for i in range(0, 55, 5)], rotation=-45)
#####################################
spearmans, dists, scc, std = correlate_matrices(hic_dat... | doc/source/nbpictures/tutorial_9-Compare_and_merge_Hi-C_experiments.ipynb | 3DGenomes/tadbit | gpl-3.0 |
The SCC score as in HiCrep (see https://doi.org/10.1101/gr.220640.117) is also computed. The value of SCC ranges from −1 to 1 and can be interpreted in a way similar to the standard correlation | print('SCC score: %.4f (+- %.7f)' % (scc, std))
reso = 1000000
hic_data1 = hic_data2 = None
hic_data1 = load_hic_data_from_bam(base_path.format(cel2, rep1),
resolution=reso,
biases=bias_path.format(cel2, rep1, reso // 1000),
... | doc/source/nbpictures/tutorial_9-Compare_and_merge_Hi-C_experiments.ipynb | 3DGenomes/tadbit | gpl-3.0 |
Comparison between cell types
Replicate 1 | reso = 100000
hic_data1 = hic_data2 = None
hic_data1 = load_hic_data_from_bam(base_path.format(cel1, rep1),
resolution=reso,
biases=bias_path.format(cel1, rep1, reso // 1000),
ncpus=8)
hic_data2 = load_hic_data_from_ba... | doc/source/nbpictures/tutorial_9-Compare_and_merge_Hi-C_experiments.ipynb | 3DGenomes/tadbit | gpl-3.0 |
We expect a lower SCC score between different cell types | print('SCC score: %.4f (+- %.7f)' % (scc, std))
reso = 1000000
hic_data1 = load_hic_data_from_bam(base_path.format(cel1, rep1),
resolution=reso,
biases=bias_path.format(cel1, rep1, reso // 1000),
ncpus=8)
hic_data2 = ... | doc/source/nbpictures/tutorial_9-Compare_and_merge_Hi-C_experiments.ipynb | 3DGenomes/tadbit | gpl-3.0 |
Replicate 2 | reso = 100000
hic_data1 = hic_data2 = None
hic_data1 = load_hic_data_from_bam(base_path.format(cel1, rep2),
resolution=reso,
biases=bias_path.format(cel1, rep2, reso // 1000),
ncpus=8)
hic_data2 = load_hic_data_from_ba... | doc/source/nbpictures/tutorial_9-Compare_and_merge_Hi-C_experiments.ipynb | 3DGenomes/tadbit | gpl-3.0 |
Merge Hi-C experiments
Once agreed that experiments are similar, they can be merged.
Here is a simple way to merge valid pairs. Arguably we may want to merge unfiltered data, but the difference would be minimal, especially with non-replicates. | from pytadbit.mapping import merge_bams
! mkdir -p results/fragment/mouse_B_both/
! mkdir -p results/fragment/mouse_PSC_both/
! mkdir -p results/fragment/mouse_B_both/03_filtering/
! mkdir -p results/fragment/mouse_PSC_both/03_filtering/
cell = 'mouse_B'
rep1 = 'rep1'
rep2 = 'rep2'
hic_data1 = 'results/fragment/{0}_... | doc/source/nbpictures/tutorial_9-Compare_and_merge_Hi-C_experiments.ipynb | 3DGenomes/tadbit | gpl-3.0 |
Normalizing merged data | from pytadbit.mapping.analyze import hic_map
! mkdir -p results/fragment/mouse_B_both/04_normalizing
! mkdir -p results/fragment/mouse_PSC_both/04_normalizing | doc/source/nbpictures/tutorial_9-Compare_and_merge_Hi-C_experiments.ipynb | 3DGenomes/tadbit | gpl-3.0 |
All in one loop to:
- filter
- normalize
- generate intra-chromosome and genomic matrices
All datasets are analysed at various resolutions. | for cell in ['mouse_B','mouse_PSC']:
print(' -', cell)
for reso in [1000000, 200000, 100000]:
print(' *', reso)
# load hic_data
hic_data = load_hic_data_from_bam(
'results/fragment/{0}_both/03_filtering/valid_reads12_{0}.bam'.format(cell),
reso)
# filter... | doc/source/nbpictures/tutorial_9-Compare_and_merge_Hi-C_experiments.ipynb | 3DGenomes/tadbit | gpl-3.0 |
We begin by importing the usual libraries, setting up a very simple dataloader, and generating a toy dataset of spirals. | def dataloader(arrays, batch_size, *, key):
dataset_size = arrays[0].shape[0]
assert all(array.shape[0] == dataset_size for array in arrays)
indices = jnp.arange(dataset_size)
while True:
perm = jrandom.permutation(key, indices)
(key,) = jrandom.split(key, 1)
start = 0
en... | examples/train_rnn.ipynb | patrick-kidger/equinox | apache-2.0 |
Now for our model.
Purely by way of example, we handle the final adding on of bias ourselves, rather than letting the linear layer do it. This is just so we can demonstrate how to use custom parameters in models. | class RNN(eqx.Module):
hidden_size: int
cell: eqx.Module
linear: eqx.nn.Linear
bias: jnp.ndarray
def __init__(self, in_size, out_size, hidden_size, *, key):
ckey, lkey = jrandom.split(key)
self.hidden_size = hidden_size
self.cell = eqx.nn.GRUCell(in_size, hidden_size, key=ck... | examples/train_rnn.ipynb | patrick-kidger/equinox | apache-2.0 |
And finally the training loop. | def main(
dataset_size=10000,
batch_size=32,
learning_rate=3e-3,
steps=200,
hidden_size=16,
depth=1,
seed=5678,
):
data_key, loader_key, model_key = jrandom.split(jrandom.PRNGKey(seed), 3)
xs, ys = get_data(dataset_size, key=data_key)
iter_data = dataloader((xs, ys), batch_size, ... | examples/train_rnn.ipynb | patrick-kidger/equinox | apache-2.0 |
eqx.filter_value_and_grad will calculate the gradient with respect to the first argument (model). By default it will calculate gradients for all the floating-point JAX arrays and ignore everything else. For example the model parameters will be differentiated, whilst model.hidden_size is an integer and will be left alon... | main() # All right, let's run the code. | examples/train_rnn.ipynb | patrick-kidger/equinox | apache-2.0 |
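The filtering idea, differentiate the floating-point leaves and treat everything else as static, can be illustrated with a plain-Python toy. This is not Equinox's implementation, just the concept applied to a flat parameter dict:

```python
# Toy illustration of the filtering idea (not Equinox's implementation):
# split a parameter dict into differentiable float leaves and static leaves.
def partition(params, is_diff=lambda v: isinstance(v, float)):
    diff = {k: v for k, v in params.items() if is_diff(v)}
    static = {k: v for k, v in params.items() if not is_diff(v)}
    return diff, static

params = {"weight": 0.5, "bias": -1.2, "hidden_size": 16, "name": "gru"}
diff, static = partition(params)
assert diff == {"weight": 0.5, "bias": -1.2}       # would be differentiated
assert static == {"hidden_size": 16, "name": "gru"}  # left alone
```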
Loading Model Results
First, we need to find the list of all directories in our model output folder from the 001-storing-model-results notebook. We can do this using the glob and os modules, which will allow us to work with directories and list their contents. | import os
# Using os.listdir to show the current directory
os.listdir("./")
# Using os.listdir to show the output directory
os.listdir("output")[0:5]
import glob
# Using glob to list the output directory
glob.glob("output/run-*")[0:5] | notebooks/basic-stats/002-reading-model-results.ipynb | mjbommar/cscs-530-w2016 | bsd-2-clause |
Using os.path.join and os.path.basename
We can also create paths and navigate directory trees using os.path.join. This method helps build file and directory paths, like we see below. | run_directory = os.listdir("output")[0]
print(run_directory)
print(os.path.join(run_directory,
"parameters.csv"))
print(run_directory)
print(os.path.basename(run_directory)) | notebooks/basic-stats/002-reading-model-results.ipynb | mjbommar/cscs-530-w2016 | bsd-2-clause |
Iterating through model run directories
Next, once we are able to "find" all model run directories, we need to iterate through them and read all the data they contain. In the cells below, we create data frames for each CSV output file from our 001-storing-model-results notebook. | # Create "complete" data frames
run_data = []
all_timeseries_data = pandas.DataFrame()
all_interaction_data = pandas.DataFrame()
# Iterate over all directories
for run_directory in glob.glob("output/run*"):
# Get the run ID from our directory name
run_id = os.path.basename(run_directory)
# Load param... | notebooks/basic-stats/002-reading-model-results.ipynb | mjbommar/cscs-530-w2016 | bsd-2-clause |
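Appending to an empty DataFrame inside a loop works, but the usual pandas idiom is to collect the per-run frames in a list and concatenate them once at the end. A minimal sketch (the column names here are hypothetical):

```python
import pandas as pd

# Collect per-run frames in a list, then concatenate once at the end
frames = []
for run_id in ["run-1", "run-2"]:
    df = pd.DataFrame({"step": [0, 1], "value": [0.1, 0.2]})
    df["run"] = run_id  # tag each row with its run ID
    frames.append(df)

all_runs = pd.concat(frames, ignore_index=True)
assert all_runs.shape == (4, 3)
assert list(all_runs["run"].unique()) == ["run-1", "run-2"]
```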
This is a pandas DataFrame object. It has a lot of great properties that are beyond the scope of our tutorials. | forecast_data['temp_air'].plot(); | docs/tutorials/forecast_to_power.ipynb | cwhanse/pvlib-python | bsd-3-clause |
Plot the GHI data. Most pvlib forecast models derive this data from the weather models' cloud cover data. | ghi = forecast_data['ghi']
ghi.plot()
plt.ylabel('Irradiance ($W/m^{-2}$)'); | docs/tutorials/forecast_to_power.ipynb | cwhanse/pvlib-python | bsd-3-clause |
Note that AOI has values greater than 90 deg. This is ok.
POA total
Calculate POA irradiance | poa_irrad = irradiance.poa_components(aoi, forecast_data['dni'], poa_sky_diffuse, poa_ground_diffuse)
poa_irrad.plot()
plt.ylabel('Irradiance ($W/m^{-2}$)')
plt.title('POA Irradiance'); | docs/tutorials/forecast_to_power.ipynb | cwhanse/pvlib-python | bsd-3-clause |
Cell temperature
Calculate pv cell temperature | ambient_temperature = forecast_data['temp_air']
wnd_spd = forecast_data['wind_speed']
thermal_params = temperature.TEMPERATURE_MODEL_PARAMETERS['sapm']['open_rack_glass_polymer']
pvtemp = temperature.sapm_cell(poa_irrad['poa_global'], ambient_temperature, wnd_spd, **thermal_params)
pvtemp.plot()
plt.ylabel('Temperatur... | docs/tutorials/forecast_to_power.ipynb | cwhanse/pvlib-python | bsd-3-clause |
Run the SAPM using the parameters we calculated above. | effective_irradiance = pvsystem.sapm_effective_irradiance(poa_irrad.poa_direct, poa_irrad.poa_diffuse,
airmass, aoi, sandia_module)
sapm_out = pvsystem.sapm(effective_irradiance, pvtemp, sandia_module)
#print(sapm_out.head())
sapm_out[['p_mp']].plot()
plt.yla... | docs/tutorials/forecast_to_power.ipynb | cwhanse/pvlib-python | bsd-3-clause |
Choose a particular inverter | sapm_inverter = sapm_inverters['ABB__MICRO_0_25_I_OUTD_US_208__208V_']
sapm_inverter
p_ac = inverter.sandia(sapm_out.v_mp, sapm_out.p_mp, sapm_inverter)
p_ac.plot()
plt.ylabel('AC Power (W)')
plt.ylim(0, None); | docs/tutorials/forecast_to_power.ipynb | cwhanse/pvlib-python | bsd-3-clause |
Plot just a few days. | p_ac[start:start+pd.Timedelta(days=2)].plot(); | docs/tutorials/forecast_to_power.ipynb | cwhanse/pvlib-python | bsd-3-clause |
ToDo
* Your Network Summary
* Network source and preprocessing
* Node/Edge attributes
* Size, Order
* Gorgeous network layout. Try to show that your network has some structure, play with node sizes and colors, scaling parameters, tools like Gephi may be useful here
* Degree distribution, Diameter, Clustering Coefficient
Struc... | G = nx.read_gml( path =
"./data/ha5/huge_100004196072232_2015_03_24_11_20_1d58b0ecdf7713656ebbf1a177e81fab.gml", relabel = False ) | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_ha_final.ipynb | ivannz/study_notes | mit |
The order of a network $G=(V,E)$ is $|V|$ and the size is $|E|$. | print "The network G is of the order %d. Its size is %d." % ( G.number_of_nodes( ), G.number_of_edges( ) ) | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_ha_final.ipynb | ivannz/study_notes | mit |
Visualisation
It is always good to have a nice and attractive picture in a study. | deg = G.degree( )
fig = plt.figure( figsize = (12,8) )
axs = fig.add_subplot( 1,1,1, axisbg = 'black' )
nx.draw_networkx( G, with_labels = False, ax = axs,
cmap = plt.cm.Purples, node_color = deg.values( ), edge_color = "magenta",
nodelist = deg.keys( ), node_size = [ 100 * np.log( d + 1 ) for d in deg.values... | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_ha_final.ipynb | ivannz/study_notes | mit |
Let's have a look at connected components, since the plot suggests, that the graph is not connected. | CC = sorted( nx.connected_components( G ), key = len, reverse = True )
for i, c in enumerate( CC, 1 ):
row = ", ".join( [ G.node[ n ][ 'label' ] for n in c] )
print "%#2d (%d)\t"%(i, len(c)), ( row )[:100].strip() + (" ..." if len( row ) > 100 else "" ) | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_ha_final.ipynb | ivannz/study_notes | mit |
The largest connected component represents family, my acquaintances at school ($\leq 2003$) and in university ($2003-2009$), and the second largest component consists of people I met at the Oxford Royale Summer School in 2012. The one-node components are either old acquaintances, select colleagues from work, instructors, etc.
Since... | H = G.subgraph( CC[ 0 ] )
print "The largest component is of the order %d. Its size is %d." % ( H.number_of_nodes( ), H.number_of_edges( ) ) | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_ha_final.ipynb | ivannz/study_notes | mit |
Let's plot the subgraph and study the its degree distribution. | deg = H.degree( )
fig = plt.figure( figsize = (16, 6) )
axs = fig.add_subplot( 1,2,1, axisbg = 'black', title = "Master cluster", )
pos = nx.fruchterman_reingold_layout( H )
nx.draw_networkx( H, with_labels = False, ax = axs,
cmap = plt.cm.Oranges, node_color = deg.values( ), edge_color = "cyan",
nodelist = d... | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_ha_final.ipynb | ivannz/study_notes | mit |
Degree distribution
A useful tool for exploring the tail behaviour of a sample is the Mean Excess plot, defined as the
$$M(u) = \mathbb{E}\Big(\Big. X-u\,\big.\big\rvert\,X\geq u \Big.\Big)$$
of which the empirical counterpart is
$$\hat{M}(u) = {\Big(\sum_{i=1}^n 1_{x_i\geq u}\Big)^{-1}}\sum_{i=1}^n (x_i-u) 1_{x_i\geq u... | from scipy.stats import rankdata
def mean_excess( data ) :
data = np.array( sorted( data, reverse = True ) )
ranks = rankdata( data, method = 'max' )
excesses = np.array( np.unique( len( data ) - ranks ), dtype = np.int )
thresholds = data[ excesses ]
mean_excess = np.cumsum( data )[ excesses ] / ( ... | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_ha_final.ipynb | ivannz/study_notes | mit |
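To make the estimator concrete, here is a hedged, self-contained sketch of the empirical mean excess curve (an independent reimplementation, not the notebook's `mean_excess` function above): for an exponential sample the curve is flat by memorylessness, while for a heavy-tailed Pareto sample it grows roughly linearly in the threshold.

```python
import numpy as np

def mean_excess_curve(data):
    """Empirical mean excess M(u) evaluated at each sorted sample value."""
    x = np.sort(np.asarray(data, dtype=float))
    n = len(x)
    tail_sums = np.cumsum(x[::-1])[::-1]       # tail_sums[i] = x[i] + ... + x[n-1]
    thresholds = x[:-1]                        # thresholds u = x[0], ..., x[n-2]
    counts = n - 1 - np.arange(n - 1)          # number of exceedances above x[i]
    excess = (tail_sums[1:] - counts * thresholds) / counts
    return thresholds, excess

rng = np.random.default_rng(0)
# Exponential: M(u) is constant (equal to the scale); Pareto(x_m=1, a): M(u) = u / (a - 1).
u_exp, m_exp = mean_excess_curve(rng.exponential(scale=2.0, size=20000))
u_par, m_par = mean_excess_curve(rng.pareto(a=2.5, size=20000) + 1.0)
```

Plotting `m_exp` against `u_exp` (and likewise for the Pareto pair) reproduces the flat-vs-linear contrast the text relies on.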
The Mean Excess plot does seem to indicate that the node degree does not follow a scale-free distribution. Indeed, the plot levels off as it approaches the value $50$. The rightmost spike is in the region where the variance of the estimate of the conditional expectation is extremely high, which is why this artefact o... | print "This subgraph's clustering coefficient is %.3f." % nx.average_clustering( H )
print "This subgraph's average shortest path length is %.3f." % nx.average_shortest_path_length( H )
print "The radius (maximal distance) is %d." % nx.radius( H ) | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_ha_final.ipynb | ivannz/study_notes | mit |
The clustering coefficient is moderately high, and on average any two members of this component are 2 hops away from each other. This means that this subgraph has a tightly knit cluster structure, almost like a small world, were it not for the light-tailed degree distribution.
Structural analysis
Centrality measures
D... | pr = nx.pagerank_numpy( H, alpha = 0.85 )
cb = nx.centrality.betweenness_centrality( H )
cc = nx.centrality.closeness_centrality( H )
cd = nx.centrality.degree_centrality( H ) | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_ha_final.ipynb | ivannz/study_notes | mit |
The mixing coefficient
The mixing coefficient for a numerical node attribute $X = \big(x_i\big)$ in an undirected graph $G$, with the adjacency matrix $A$, is defined as
$$\rho(x) = \frac{\text{cov}}{\text{var}} = \frac{\sum_{ij}A_{ij}(x_i-\bar{x})(x_j-\bar{x})}{\sum_{ij}A_{ij}(x_i-\bar{x})^2} $$
where $\bar{x} = \frac... | def assortativity( G, X ) :
## represent the graph in an adjacency matrix form
A = nx.to_numpy_matrix( G, dtype = np.float, nodelist = G.nodes( ) )
## Convert x -- dictionary to a numpy vector
x = np.array( [ X[ n ] for n in G.nodes( ) ] , dtype = np.float )
## Compute the x'Ax part
xAx = np.dot( x, np.arra... | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_ha_final.ipynb | ivannz/study_notes | mit |
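As a sanity check, here is a hedged, self-contained version of the same formula (an independent sketch using `nx.to_numpy_array` and a degree-weighted mean $\bar{x}$, not the truncated `assortativity` above), compared against networkx's built-in degree assortativity on an arbitrary example graph.

```python
import networkx as nx
import numpy as np

def assortativity(G, X):
    """Mixing coefficient rho(x) for a numeric node attribute X (dict: node -> value)."""
    nodes = list(G)
    A = nx.to_numpy_array(G, nodelist=nodes)
    x = np.array([X[n] for n in nodes], dtype=float)
    deg = A.sum(axis=1)
    xbar = (deg @ x) / A.sum()          # degree-weighted mean of x
    d = x - xbar
    return (d @ A @ d) / (deg @ (d ** 2))  # cov / var over edge endpoints

G = nx.barbell_graph(5, 2)
rho = assortativity(G, dict(G.degree()))
```

With `X` set to the node degrees this should coincide with `nx.degree_assortativity_coefficient`, since for an undirected graph both reduce to the Pearson correlation of degrees across edge endpoints.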
Let's compute the assortativity for the centralities, pagerank vector, vertex degrees and node attributes. | print "PageRank assortativity coefficient: %.3f" % assortativity( H, nx.pagerank_numpy( H, alpha = 0.85 ) )
print "Betweenness centrality assortativity coefficient: %.3f" % assortativity( H, nx.centrality.betweenness_centrality( H ) )
print "Closenesss centrality assortativity coefficient: %.3f" % assortativity( H, nx.... | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_ha_final.ipynb | ivannz/study_notes | mit |
This component does not show segregation patterns in connectivity: the computed coefficients indicate that neither "opposites" nor "kindred spirits" preferentially attach. The noticeably high value for degree centrality is probably due to the component already having a tight cluster structure.
Node Rankings
It is som... | ## Print the upper triangle of a symmetric matrix in reverse column order
def show_symmetric_matrix( A, labels, diag = False ) :
d = 0 if diag else 1
c = len( labels ) - d
print "\t", "\t".join( c * [ "%.3s" ] ) % tuple( labels[ d: ][ ::-1 ] )
for i, l in enumerate( labels if diag else labels[ :-1 ] ) :... | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_ha_final.ipynb | ivannz/study_notes | mit |
It is actually interesting to compare the orderings produced by different vertex-ranking algorithms. The most direct way is to analyse the pairwise Spearman's $\rho$, since it compares the rank transformation of one vector of observed data to another. | from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr as rho
labels = [ 'btw', 'deg', 'cls', 'prk' ]
align = lambda dd : np.array( [ dd[ n ] for n in H.nodes( ) ], dtype = np.float )
rank_dist = squareform( pdist(
[ align( cb ), align( cd ), align( cc ), align( pr ) ],
metri... | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_ha_final.ipynb | ivannz/study_notes | mit |
The rankings match each other very closely!
Community detection
A $k$-clique community detection method considers a set of nodes a community if every node is part of at least one $k$-clique of the set and adjacent $k$-cliques overlap in at least $k-1$ vertices. | kcq = list( nx.community.k_clique_communities( H, 3 ) ) | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_ha_final.ipynb | ivannz/study_notes | mit |
The label propagation algorithm initially assigns a unique label to each node, and then relabels nodes in random order until stabilization.
The new label of a node is the label shared by the largest number of its neighbours.
Code borrowed from lpa.py by Tyler Rush, which can be found at networkx-devel. The procedure is an implem... | import lpa
lab = lpa.semisynchronous_prec_max( H ) | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_ha_final.ipynb | ivannz/study_notes | mit |
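Since `lpa.py` is an external file, here is a minimal, hedged sketch of the label propagation idea itself: asynchronous updates with a tie-break that prefers the current label (to damp oscillation) and otherwise picks the smallest candidate. This is an illustration, not the `semisynchronous_prec_max` implementation used above.

```python
import random
from collections import Counter
import networkx as nx

def label_propagation(G, max_iter=100, seed=42):
    """Minimal asynchronous LPA: each node adopts the label most common
    among its neighbours, iterating until no label changes."""
    rng = random.Random(seed)
    labels = {n: n for n in G}              # unique initial labels
    nodes = list(G)
    for _ in range(max_iter):
        rng.shuffle(nodes)
        changed = False
        for n in nodes:
            if len(G[n]) == 0:
                continue                    # isolated nodes keep their label
            counts = Counter(labels[v] for v in G[n])
            best = max(counts.values())
            candidates = sorted(l for l, c in counts.items() if c == best)
            if labels[n] in candidates:
                continue                    # prefer the current label on ties
            labels[n] = candidates[0]
            changed = True
        if not changed:
            break
    return labels

# Two 6-cliques joined by a single edge collapse to very few labels.
G = nx.barbell_graph(6, 0)
labels = label_propagation(G)
```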
Markov Cluster Algorithm (MCL).
Input: Transition matrix $T = D^{-1}A$
Output: Adjacency matrix $M^*$
1. Set $M = T$
2. repeat:
3. Expansion Step: $M = M^p$ (usually $p=2$)
4. Inflation Step: Raise every entry of $M$ to the power $\alpha$ (usually $\alpha=2$)
5. Renormalize: Normalize each row by its sum
... | def mcl_iter( A, p = 2, alpha = 2, theta = 1e-8, rel_eps = 1e-4, niter = 10000 ) :
## Convert A into a transition kernel: M_{ij} is the probability of making a transition from i to j.
M = np.multiply( 1.0 / A.sum( axis = 1, dtype = np.float64 ).reshape(-1,1), A )
i = 0 ; status = -1
while i < niter :
... | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_ha_final.ipynb | ivannz/study_notes | mit |
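A compact, hedged sketch of the same expansion/inflation loop on a toy graph (two triangles joined by a single edge); self-loops are added, as is common practice, to stabilise the iteration. This is an independent illustration, not the `mcl_iter` routine above.

```python
import numpy as np
import networkx as nx

def mcl(A, p=2, alpha=2, n_iter=200, tol=1e-8):
    """Markov Cluster sketch: expansion (matrix power p), inflation
    (entrywise power alpha), and row renormalisation, until convergence."""
    M = A / A.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        prev = M
        M = np.linalg.matrix_power(M, p)          # expansion
        M = M ** alpha                            # inflation
        M = M / M.sum(axis=1, keepdims=True)      # renormalise rows
        if np.abs(M - prev).max() < tol:
            break
    return M

# Two triangles joined by one edge: MCL should separate the clusters.
G = nx.Graph([(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)])
A = nx.to_numpy_array(G) + np.eye(6)              # self-loops stabilise MCL
M = mcl(A)
attractor = M.argmax(axis=1)                      # dominant flow target per node
```

Reading each node's dominant flow target from the converged matrix recovers the two triangles as separate clusters.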
Let's check how the Markov Clustering Algorithm fares against $k$-clique, and vertex labelling. | fig = plt.figure( figsize = (12, 8) )
axs = fig.add_subplot( 1,1,1, axisbg = 'black', title = "Master cluster", )
A = nx.to_numpy_matrix( H, dtype = np.float, nodelist = nx.spectral_ordering( H ) )
C, _ = mcl_iter( A )
mcl = extract_communities( C, lengths = False)
axs.spy( A, color = "gold", markersize = 15, marker = ... | year_14_15/spring_2015/netwrok_analysis/notebooks/assignments/networks_ha_final.ipynb | ivannz/study_notes | mit |
Quiz Question: What is the Euclidean distance between the query house and the 10th house of the training set? | print(features_test[0])
print(features_train[9])
import math
def get_distance(vec1, vec2):
return math.sqrt(np.sum((vec1 - vec2)**2))
get_distance(features_test[0], features_train[9]) | ml-regression/week 6/K-NN.ipynb | isendel/machine-learning | apache-2.0 |
Quiz Question: Among the first 10 training houses, which house is the closest to the query house? | min_distance = None
closest_house = None
for i, train_house in enumerate(features_train[0:10]):
dist = get_distance(features_test[0], train_house)
if i == 0 or dist < min_distance:
min_distance = dist
closest_house = i
print(min_distance)
print(closest_house)
diff = features_train - features_t... | ml-regression/week 6/K-NN.ipynb | isendel/machine-learning | apache-2.0 |
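The broadcasting idea started in the cell above can be wrapped into a small helper; a hedged sketch consistent with how `compute_distances` is used in the following cells (the sample numbers here are made up):

```python
import numpy as np

def compute_distances(features_instances, features_query):
    """Euclidean distance from one query row to every row of the training matrix."""
    diff = features_instances - features_query     # broadcast query across rows
    return np.sqrt(np.sum(diff ** 2, axis=1))

train = np.array([[0.0, 0.0], [3.0, 4.0], [1.0, 1.0]])
query = np.array([0.0, 0.0])
dists = compute_distances(train, query)            # distances to all three rows
```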
17. Quiz Question: What is the predicted value of the query house based on 1-nearest neighbor regression? | distances = compute_distances(features_train, features_test[2])
print(distances)
print(np.argmin(distances))
np.where(distances == min(distances))
distances[1149]
def k_nearest_neighbors(k, feature_train, features_query):
distances = compute_distances(features_train, features_query)
return distances, np.args... | ml-regression/week 6/K-NN.ipynb | isendel/machine-learning | apache-2.0 |
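For completeness, a hedged sketch of the full k-NN prediction step: sort the distances with `np.argsort`, keep the `k` smallest, and average their targets. The function names and sample numbers are illustrative, not the notebook's own.

```python
import numpy as np

def k_nearest_neighbors(k, features_train, features_query):
    """Indices of the k training rows closest to the query row."""
    diff = features_train - features_query
    distances = np.sqrt(np.sum(diff ** 2, axis=1))
    return np.argsort(distances)[:k]

def predict_output_of_query(k, features_train, output_train, features_query):
    """Average the targets of the k closest training rows."""
    neighbors = k_nearest_neighbors(k, features_train, features_query)
    return np.mean(output_train[neighbors])

train = np.array([[0.0], [1.0], [10.0], [11.0]])
prices = np.array([100.0, 110.0, 500.0, 520.0])
pred = predict_output_of_query(2, train, prices, np.array([0.5]))
```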
Introduction to networkx
Network Basics
Networks, a.k.a. graphs, are an immensely useful tool for modeling complex relational problems. Networks are comprised of two main entities:
Nodes: commonly represented as circles. In the academic literature, nodes are also known as "vertices".
Edges: commonly represented a... | G = nx.read_gpickle('Synthetic Social Network.pkl')
# .nodes() gives you what nodes (a list) are represented in the network
# here we access the number of nodes
print(len(G.nodes()))
# or equivalently
print(len(G))
# Who is connected to who in the network?
# the edges are represented as a list of tuples,
# where eac... | networkx/networkx.ipynb | ethen8181/machine-learning | mit |
Concept
A network, more technically known as a graph, is comprised of:
a set of nodes
joined by a set of edges
They can be represented as two lists:
A node list: a list of 2-tuples where the first element of each tuple is the representation of the node, and the second element is a dictionary of metadata associated w... | # networkx will return a list of tuples in the form (node_id, attribute_dictionary)
print(G.nodes(data = True)[:5])
# exercise: Count how many males and females are represented in the graph
from collections import Counter
sex = [d['sex'] for _, d in G.nodes(data = True)]
Counter(sex) | networkx/networkx.ipynb | ethen8181/machine-learning | mit |
Edges can also store attributes in their attribute dictionary. Here the attribute is a datetime object representing the datetime in which the edges were created. | G.edges(data = True)[:4]
# exercise: figure out the range of dates during which these relationships were forged.
# Specifically, compute the earliest and last date
dates = [d['date'] for _, _, d in G.edges(data = True)]
print(min(dates))
print(max(dates)) | networkx/networkx.ipynb | ethen8181/machine-learning | mit |
Exercise
We found out that there are two individuals that we left out of the network, individual no. 31 and 32. They are one male (31) and one female (32), their ages are 22 and 24 respectively, they knew each other on 2010-01-09, and together, they both knew individual 7, on 2009-12-11. Use the functions G.add_node() ... | G.add_node(31, age = 22, sex = 'Male')
G.add_node(32, age = 24, sex = 'Female')
G.add_edge(31, 32, date = datetime(2010, 1, 9))
G.add_edge(31, 7, date = datetime(2009, 12, 11))
G.add_edge(32, 7, date = datetime(2009, 12, 11))
def test_graph_integrity(G):
"""verify that the implementation above is correct"""
as... | networkx/networkx.ipynb | ethen8181/machine-learning | mit |
Note that networkx overwrites the old attribute data if you add a node that already exists: e.g. we start out with G.add_node(31, age = 22, sex = 'Male'); if we later call G.add_node(31, age = 25, sex = 'Male'), then the age for node 31 becomes 25.
Coding Patterns
These are some recommended coding patterns when doing network ana... | plt.rcParams['figure.figsize'] = 8, 6
nx.draw(G, with_labels = True) | networkx/networkx.ipynb | ethen8181/machine-learning | mit |
Another way is to use a matrix to represent them. This is done by using the nx.to_numpy_matrix(G) function. The nodes are on the x- and y- axes, and a filled square represent an edge between the nodes.
We then use matplotlib's pcolor(numpy_array) function to plot. Because pcolor cannot take in numpy matrices, we will ... | matrix = nx.to_numpy_matrix(G)
plt.pcolor(np.array(matrix))
plt.axes().set_aspect('equal') # set aspect ratio equal to get a square visualization
plt.xlim(min(G.nodes()), max(G.nodes())) # set x and y limits to the number of nodes present.
plt.ylim(min(G.nodes()), max(G.nodes()))
plt.title('Adjacency Matrix')
plt.show... | networkx/networkx.ipynb | ethen8181/machine-learning | mit |
Hubs
How do we evaluate the importance of some individuals in a network?
Within a social network, there will be certain individuals which perform certain important functions. For example, there may be hyper-connected individuals who are connected to many, many more people. They would be of use in the spreading of infor... | # re-load the pickled data without the new individuals added in the introduction
G = nx.read_gpickle('Synthetic Social Network.pkl')
# the number of neighbors that individual #19 has
len(G.neighbors(19))
# create a ranked list of the importance of each individual,
# based on the number of neighbors they have?
node_n... | networkx/networkx.ipynb | ethen8181/machine-learning | mit |
Approach 2: Degree Centrality
The number of other nodes that one node is connected to is a measure of its centrality. networkx implements a degree centrality, which is defined as the number of neighbors that a node has normalized to the number of individuals it could be connected to in the entire graph. This is accesse... | print(nx.degree_centrality(G)[19])
# confirm by manual calculating
# remember the -1 to exclude the node itself (i.e. no self-loops),
# note that in some settings it makes sense to keep self-loops (e.g. bike routes)
print(len(G.neighbors(19)) / (len( G.nodes() ) - 1)) | networkx/networkx.ipynb | ethen8181/machine-learning | mit |
Degree centrality and the number of neighbors are strongly related, as both measure whether a given node is a hub. By identifying the hubs (e.g. a LinkedIn influencer, the source spreading a disease) we can act on them to create value or prevent catastrophes. | # exercise: create a histogram of the distribution of degree centralities
centrality = list(nx.degree_centrality(G).values())
plt.hist(centrality)
plt.title('degree centralities')
plt.show()
# exercise: create a histogram of the distribution of the number of neighbors
neighbor = [len(G.neighbors(n)) for n in G]
plt.his... | networkx/networkx.ipynb | ethen8181/machine-learning | mit |
Paths in a Network
Graph traversal is akin to walking along the graph, node by node, restricted by the edges that connect the nodes. Graph traversal is particularly useful for understanding the local structure (e.g. connectivity, retrieving the exact relationships) of certain portions of the graph and for finding paths... | from collections import deque
def path_exists(G, source, target):
"""checks whether a path exists between two nodes (node1, node2) in graph G"""
if not G.has_node(source):
raise ValueError('Source node {} not in graph'.format(source))
if not G.has_node(target):
raise ValueError('Targe... | networkx/networkx.ipynb | ethen8181/machine-learning | mit |
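Since the cell above is cut off, here is a hedged, self-contained sketch of the same breadth-first-search idea, checked against `nx.has_path` (the toy graph is made up):

```python
from collections import deque
import networkx as nx

def path_exists(G, source, target):
    """Breadth-first search from `source`; returns True once `target` is seen."""
    if source not in G or target not in G:
        raise ValueError('source and target must both be nodes of G')
    visited = {source}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for neighbor in G.neighbors(node):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return False

G = nx.Graph([(1, 2), (2, 3), (4, 5)])   # two disconnected components
```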
Meanwhile... thankfully, networkx has a function for us to use, titled has_path, so we don't have to implement this on our own. :-) | nx.has_path(G = G, source = 29, target = 26) | networkx/networkx.ipynb | ethen8181/machine-learning | mit |
networkx also has other shortest path algorithms implemented. e.g. nx.shortest_path(G, source, target) gives us a list of nodes that exist within one of the shortest paths between the two nodes.
We can build upon these to build our own graph query functions. Let's see if we can trace the shortest path from one node to... | nx.shortest_path(G, 4, 14)
def extract_path_edges(G, source, target):
new_G = None
if nx.has_path(G, source, target):
nodes_of_interest = nx.shortest_path(G, source, target)
new_G = G.subgraph(nodes_of_interest)
return new_G
source = 4
target = 14
new_G = extract_path_edges(G, source... | networkx/networkx.ipynb | ethen8181/machine-learning | mit |
Hubs Revisited
It looks like individual 19 is an important person of some sort - if a message has to be passed through the network in the shortest time possible, it will usually go through person 19. Such a person has a high betweenness centrality. This is implemented as one of NetworkX's centrality algorithms.
Not... | nx.betweenness_centrality(G, normalized = False)[19] | networkx/networkx.ipynb | ethen8181/machine-learning | mit |
The set of relationships involving A, B and C, if closed, involves a triangle in the graph. The set of relationships that also include D form a square. You may have observed that social networks (LinkedIn, Facebook, Twitter etc.) have friend recommendation systems. How exactly do they work? Apart from analyzing other v... | # reload the network
G = nx.read_gpickle('Synthetic Social Network.pkl')
nx.draw(G, with_labels = True) | networkx/networkx.ipynb | ethen8181/machine-learning | mit |
Exercise
Write a function that takes in one node and its associated graph as an input, and returns a list or set of itself + all other nodes that it is in a triangle relationship with.
Hint: If a neighbor of my neighbor is also my neighbor, then the three of us are in a triangle relationship. | def get_triangles(G, node):
# store all the data points that are in a triangle
# include the targeted node to draw sub-graph later
triangles = set([node])
neighbors1 = set(G.neighbors(node))
for n in neighbors1:
# if the target node is in a triangle relationship, then
#... | networkx/networkx.ipynb | ethen8181/machine-learning | mit |
Friend Recommendation: Open Triangles
Let's see if we can do some friend recommendations by looking for open triangles. Open triangles are like those that we described earlier on - A knows B and B knows C, but C's relationship with A isn't captured in the graph. | def get_open_triangles(G, node):
# the target node's neighbor's neighbor's neighbor's should
# not include the target node
open_triangles = []
neighbors1 = set(G.neighbors(node))
for node1 in neighbors1:
# remove the target node from the target node's neighbor's
# neighbor's, since ... | networkx/networkx.ipynb | ethen8181/machine-learning | mit |
Tables to Networks, Networks to Tables
Networks can be represented in a tabular form in two ways: As an adjacency list with edge attributes stored as columnar values, and as a node list with node attributes stored as columnar values.
Storing the network data as a single massive adjacency table, with node attributes rep... | stations = pd.read_csv(
'divvy_2013/Divvy_Stations_2013.csv',
parse_dates = ['online date'],
index_col = 'id',
encoding = 'utf-8'
)
# the id represents the node
stations.head()
trips = pd.read_csv(
'divvy_2013/Divvy_Trips_2013.csv',
parse_dates = ['starttime', 'stoptime'],
index_col = ... | networkx/networkx.ipynb | ethen8181/machine-learning | mit |
At this point, we have our stations and trips data loaded into memory.
How we construct the graph depends on the kind of questions we want to answer, which makes the definition of the "unit of consideration" (the entities whose relationships we are trying to model) extremely important.
Let's try to an... | # call the pandas DataFrame row-by-row iterator, which
# iterates through the index, and columns
G = nx.DiGraph()
for n, d in stations.iterrows():
G.add_node(n, attr_dict = d.to_dict())
# use groupby to retrieve the pair of nodes and the data count
for (start, stop), d in trips.groupby(['from_station_id', 'to... | networkx/networkx.ipynb | ethen8181/machine-learning | mit |
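For the tabular round-trip itself, networkx also ships helpers that avoid the manual loop; a hedged sketch with made-up numbers (the column names merely mirror the Divvy ones):

```python
import networkx as nx
import pandas as pd

# An adjacency (edge) list with an edge-attribute column...
edges = pd.DataFrame({
    'from_station_id': [1, 1, 2],
    'to_station_id':   [2, 3, 3],
    'count':           [5, 2, 7],
})
# ...converted into a directed graph, and back into a table.
G = nx.from_pandas_edgelist(
    edges, source='from_station_id', target='to_station_id',
    edge_attr='count', create_using=nx.DiGraph())
back = nx.to_pandas_edgelist(G, source='from_station_id', target='to_station_id')
```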
19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...) | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("Piecewise Parabolic method")
| notebooks/noaa-gfdl/cmip6/models/gfdl-esm2m/ocean.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("Western boundary enhanced background plus weak laplacian")
| notebooks/noaa-gfdl/cmip6/models/gfdl-esm2m/ocean.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |