SALib analysis tools SALib is a Python-based library for performing sensitivity analysis. So far, it contains the following analysis tools: - FAST - Fourier Amplitude Sensitivity Test - RBD-FAST - Random Balance Designs Fourier Amplitude Sensitivity Test - Method of Morris - Sobol Sensitivity Analysis - Delta Moment-Inde...
from SALib.analyze import rbd_fast # Let us look at the analyze method rbd_fast.analyze(problem=problem, Y=ishigami_results, X=all_samples)
misc/Sensitivity_analysis.ipynb
locie/locie_notebook
lgpl-3.0
It is a dictionary with a single key 'S1', whose value is a list of 6 items: the first-order indices of all 6 input variables.
# storing the first order indices of the analyze method si1 = rbd_fast.analyze(problem=problem, Y=ishigami_results, X=all_samples)['S1'] # make nice plots with the indices (looks good on your presentations) # do not use the plotting tools of SALib, they are made for the method of Morris ... fig, ax = plt.subplots() f...
Even without running the sensitivity analysis with the analyze module, this graph shows us some strong non-linearities (where blue is very near red). This only tells us that we should study this model with more samples; 150 is not enough. Surprise 5th module: a basic convergence check, NOT in SALib BUT definitely mandat...
def conv_study(n, Y, X): # take n samples among the num_samples, without replacement subset = np.random.choice(num_samples, size=n, replace=False) return rbd_fast.analyze(problem=problem, Y=Y[subset], X=X[subset])['S1'] all_indices = np.array([conv_st...
Last but not least, the BOOTSTRAP principle: select a subset of n_sub samples, with n_sub < num_samples (say 200 out of 300), and perform the sensitivity analysis on that subset. Repeat that operation 1000 times. The indices will vary: the more they vary, the larger the influence of the samples (aka some of the sampl...
def bootstrap(problem, Y, X): """ Calculate confidence intervals of rbd-fast indices 1000 draws returns 95% confidence intervals of the 1000 indices problem : dictionary as SALib uses it X : SA input(s) Y : SA output(s) """ all_indices = [] for i in range(1000): ...
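The bootstrap routine above is truncated. A minimal self-contained sketch of the same idea follows, with a plain NumPy statistic (here `np.mean`) standing in for the RBD-FAST indices, since SALib itself may not be installed; in the notebook, the statistic would be `rbd_fast.analyze(...)['S1']` evaluated on `Y[subset]`, `X[subset]`:

```python
import numpy as np

def bootstrap_ci(Y, statistic, n_draws=1000, n_sub=200, seed=0):
    """Resample n_sub samples (without replacement) n_draws times and
    return the 2.5% / 97.5% percentiles of the statistic."""
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(n_draws):
        subset = rng.choice(len(Y), size=n_sub, replace=False)
        stats.append(statistic(Y[subset]))
    return np.percentile(stats, [2.5, 97.5])

# Toy usage: 95% interval for the mean of 300 noisy samples
Y = np.random.default_rng(1).normal(loc=0.5, scale=1.0, size=300)
lo, hi = bootstrap_ci(Y, np.mean)
print(lo, hi)
```

The interval is centered on the sample statistic; a wide interval signals that the indices depend strongly on which samples were drawn.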
2.3. Fourier Series<a id='math:sec:fourier_series'></a> While Fourier series are not immediately required for the calculus used in this book, they are closely connected to the Fourier transform, which is an essential tool. Moreover, we noticed a few times that the principle of harmonic analysis or ha...
def FS_coeffs(x, m, func, T=2.0*np.pi): """ Computes Fourier series (FS) coeffs of func Input: x = input vector at which to evaluate func m = the order of the coefficient func = the function to find the FS of T = the period of func (defaults to 2 pi) """ # Evaluate t...
2_Mathematical_Groundwork/2_3_fourier_series.ipynb
griffinfoster/fundamentals_of_interferometry
gpl-2.0
That should be good enough for our purposes here. Next we create a function to sum the Fourier series.
def FS_sum(x, m, func, period=None): # If no period is specified use entire domain if period is None: period = np.abs(x.max() - x.min()) # Evaluate the coefficients and sum the series f_F = np.zeros(x.size, dtype=np.complex128) for i in range(-m,m+1): am = FS_coeffs(x, i, func, ...
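Since `FS_coeffs` and `FS_sum` above are shown truncated, here is a self-contained numerical sketch of the same idea: complex Fourier series coefficients computed by a Riemann sum on a uniform grid, then summed to reconstruct the function (the helper names `fs_coeff`/`fs_sum` are illustrative, not the notebook's originals):

```python
import numpy as np

def fs_coeff(x, fx, m, T):
    """Numerical m-th complex Fourier coefficient c_m = (1/T) * integral of f(x) e^{-2*pi*i*m*x/T} dx."""
    dx = x[1] - x[0]  # assumes a uniform grid
    return np.sum(fx * np.exp(-2j * np.pi * m * x / T)) * dx / T

def fs_sum(x, fx, order, T):
    """Partial Fourier series: sum of c_m e^{2*pi*i*m*x/T} for m = -order..order."""
    out = np.zeros(x.size, dtype=np.complex128)
    for m in range(-order, order + 1):
        out += fs_coeff(x, fx, m, T) * np.exp(2j * np.pi * m * x / T)
    return out

# Square wave on [-1, 1), period T = 2
x = np.linspace(-1.0, 1.0, 1000, endpoint=False)
fx = np.where(np.abs(x) <= 0.5, 1.0, 0.0)
approx = fs_sum(x, fx, order=20, T=2.0)
```

The zeroth coefficient is the mean of the function (0.5 for this square wave), and the reconstruction error shrinks as the order grows, apart from the Gibbs overshoot at the jumps.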
Let's see what happens if we decompose a square wave.
# define square wave function def square_wave(x): I = np.argwhere(np.abs(x) <= 0.5) tmp = np.zeros(x.size) tmp[I] = 1.0 return tmp # Set domain and compute square wave N = 250 x = np.linspace(-1.0,1.0,N) # Compute the FS up to order m m = 10 sw_F = FS_sum(x, m, square_wave, period=2.0) # Plot result ...
Figure 2.8.1: Approximating a function with a finite number of Fourier series coefficients. As can be seen from the figure, the Fourier series approximates the square wave. However, at such a low order (i.e. $m = 10$) it doesn't do a very good job. Actually, an infinite number of Fourier series coefficients is required ...
def inter_FS(x,m,func,T): f_F = FS_sum(x, m, func, period=T) plt.plot(x,f_F.real,'b') plt.plot(x,func(x),'g') interact(lambda m,T:inter_FS(x=np.linspace(-1.0,1.0,N),m=m,func=square_wave,T=T), m=(5,100,1),T=(0,2*np.pi,0.5)) and None # <a id='math:fig:fou_decomp_inter'></a><!--\label{mat...
From the Thorlabs website: https://www.thorlabs.com/newgrouppage9.cfm?objectgroup_id=1000 Read in the filter curve, fc
fc = pd.read_excel("../data/FB1250-10.xlsx", sheet_name='Transmission Data', usecols=[2,3,4], skipfooter=2)  # sheet_name/usecols replace the deprecated sheetname/parse_cols fc.tail() fc.columns
notebooks/SiGaps_20_Thorlabs_filter_curve.ipynb
Echelle/AO_bonding_paper
mit
Normalize the transmission
fc['wavelength'] = fc['Wavelength (nm)'] fc['transmission'] = fc['% Transmission']/fc['% Transmission'].max()
Drop wavelengths shorter than 1150 nm, since they are absorbed.
fc.drop(fc.index[fc.wavelength < 1150], inplace=True) sns.set_context('notebook', font_scale=1.5)
Construct a model.
import etalon as etalon np.random.seed(78704) fc.wavelength.values n1 = etalon.sellmeier_Si(fc.wavelength.values) dsp = etalon.T_gap_Si_fast(fc.wavelength, 0.0, n1) sns.set_context('paper', font_scale=1.6) sns.set_style('ticks') model_absolute = etalon.T_gap_Si_fast(fc.wavelength, 50.0, n1) model = model_absolute/...
Plot the integrated flux for a variety of gap sizes. Define an integral function.
fc.transmission_norm = fc.transmission/fc.transmission.sum() integrate_flux = lambda x: (x * fc.transmission_norm).sum()
Small gaps.
gap_sizes = np.arange(0, 50, 2) gap_trans = [integrate_flux(etalon.T_gap_Si_fast(fc.wavelength, gap_size, n1)/dsp) for gap_size in gap_sizes] sns.set_context('paper', font_scale=1.6) sns.set_style('ticks') plt.plot(gap_sizes, gap_trans, 's', label='Integrated transmission') plt.xlabel('Gap axial extent $d$ (nm)') plt...
2. Load SST data 2.1 Load time series SST Select the region (40°–50°N, 150°–135°W) and the period (1981–2015)
ds = xr.open_dataset('data/sst.mnmean.v5.nc')  # forward slash keeps the path portable sst = ds.sst.sel(lat=slice(50, 40), lon=slice(190, 240), time=slice('1981-01-01','2015-12-31')) #sst.mean(dim='time').plot()
ex33-View Northeast Pacifc sea surface temperature based on an ensemble empirical mode decomposition.ipynb
royalosyin/Python-Practical-Application-on-Climate-Variability-Studies
mit
2.2 Calculate climatology between 1981-2010
sst_clm = sst.sel(time=slice('1981-01-01','2010-12-31')).groupby('time.month').mean(dim='time') #sst_clm = sst.groupby('time.month').mean(dim='time')
2.3 Calculate SSTA
sst_anom = sst.groupby('time.month') - sst_clm sst_anom_mean = sst_anom.mean(dim=('lon', 'lat'), skipna=True)
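The climatology/anomaly logic above (xarray's `groupby('time.month')`) can be sketched framework-free with pandas on a synthetic monthly series; the variable names here are illustrative, not the notebook's:

```python
import numpy as np
import pandas as pd

# Synthetic monthly series, 1981-2015, with an annual cycle plus noise
idx = pd.date_range('1981-01-01', '2015-12-31', freq='MS')
rng = np.random.default_rng(0)
sst = pd.Series(15 + 5 * np.sin(2 * np.pi * idx.month / 12)
                + rng.normal(0, 0.2, idx.size), index=idx)

# Climatology: per-calendar-month mean over the 1981-2010 base period
base = sst.loc['1981':'2010']
clim = base.groupby(base.index.month).mean()

# Anomaly: subtract each month's climatological mean
anom = sst - clim.reindex(sst.index.month).to_numpy()
```

By construction, the anomalies averaged over the base period are zero for every calendar month, which is a quick sanity check after computing an SSTA.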
3. Carry out EEMD analysis
S = sst_anom_mean.values t = sst.time.values # Assign EEMD to `eemd` variable eemd = EEMD() # Execute EEMD on S eIMFs = eemd.eemd(S)
4. Visualize 4.1 Plot IMFs
nIMFs = eIMFs.shape[0] plt.figure(figsize=(11,20)) plt.subplot(nIMFs+1, 1, 1) # plot original data plt.plot(t, S, 'r') # plot IMFs for n in range(nIMFs): plt.subplot(nIMFs+1, 1, n+2) plt.plot(t, eIMFs[n], 'g') plt.ylabel("eIMF %i" %(n+1)) plt.locator_params(axis='y', nbins=5) plt.xlabel("Time")
4.2 Error of reconstruction
reconstructed = eIMFs.sum(axis=0) plt.plot(t, reconstructed-S)
Docstrings
A.__doc__ help(A) A.report.__doc__
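The class `A` queried above is defined earlier in the notebook and not shown here. A minimal hypothetical stand-in, consistent with the later cells (a constructor taking `x`, a `report()` method, and docstrings), might look like this:

```python
class A:
    """A simple class with a single attribute and a report method."""

    def __init__(self, x):
        self.x = x

    def __repr__(self):
        return f"A(x={self.x!r})"

    def report(self):
        """Return a short description of the stored attribute."""
        return f"x = {self.x!r}"

a = A(3.14)
print(a)           # A(x=3.14)
print(a.report())  # x = 3.14
```

With this stand-in, `A.__doc__` and `A.report.__doc__` return the docstrings, and `help(A)` prints both.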
notebook/03_Classes.ipynb
cliburn/sta-663-2017
mit
Creating an instance of a class Example of a class without a __repr__ method.
class X: """Empty class.""" x = X() print(x)
Create new instances of the class A
a0 = A('a') print(a0) a1 = A(x = 3.14) print(a1)
Attribute access
a0.x, a1.x
Method access
a0.report(), a1.report()
Class inheritance
class B(A): """Derived class inherits from A.""" def report(self): """Override the report() method of A.""" return self.x B.__doc__
Create new instances of class B
b0 = B(3 + 4j) b1 = B(x = a1)
Attribute access
b0.x b1.x
Method access
b1.report()
Nested attribute access
b1.x.report()
2. Write a function that given a model calculates $P(B|x_1,\dots,x_n)$
def p1(model, X): ''' model: a dictionary with the model probabilities. X: a list with x_i values [x_1, x_2, ... , x_n] Returns: the probability P(B = 1 | x_1, x_2, ... , x_n) ''' return 0
exam_is.ipynb
fagonzalezo/is-2016-1
mit
3. Write a function that given a model calculates $P(A|x_1,\dots,x_n)$
def p2(model, X): ''' model: a dictionary with the model probabilities. X: a list with x_i values [x_1, x_2, ... , x_n] Returns: the probability P(A = 1 | x_1, x_2, ... , x_n) ''' return 0
4. Write a function that given a model calculates $P(A|x_1, x_n)$
def p3(model, x_1, x_n): ''' model: a dictionary with the model probabilities. x_1, x_n: x values Returns: the probability P(A = 1 | x_1, x_n) ''' return 0
Total notifications This data covers all the users who received at least one notification during the month, whether or not they actually visited the site during the month, so we'd expect the numbers to be dominated by a large bulk of users with very few notifications, and a long tail of very few users with ...
def beyond_threshold(df, wikis, threshold, direction): columns = [ "wiki", "users", "% of users", "% of notifications" ] results = [] for wiki in wikis: by_wiki = filter_by_wiki(df, wiki) total_users = by_wiki.iloc[:, 1].sum() total_notifs = 0 ...
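`beyond_threshold` above is shown truncated. A self-contained sketch of the same kind of computation, on a toy per-user notification-count table (the helper name and columns here are illustrative, not the notebook's), could be:

```python
import pandas as pd

def share_beyond(counts, threshold, direction="over"):
    """Fraction of users, and of notifications, beyond a threshold.

    counts: Series of notifications per user.
    direction: "over" keeps users with counts > threshold,
               "under" keeps users with counts < threshold.
    """
    mask = counts > threshold if direction == "over" else counts < threshold
    return {
        "users": int(mask.sum()),
        "% of users": 100 * mask.mean(),
        "% of notifications": 100 * counts[mask].sum() / counts.sum(),
    }

counts = pd.Series([1, 2, 3, 5, 10, 30, 120])
print(share_beyond(counts, 4, "over"))
```

For power-law-like distributions, a small share of users typically accounts for a large share of notifications, which is what the table summaries below probe.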
Notifications research.ipynb
neilpquinn/2016-02-notifications-exploration
mit
5 or more?
beyond_threshold(notifs, wikis, 4, "over")
And what percent of users got 25 notifications or more—becoming more or less "daily notified"?
beyond_threshold(notifs, wikis, 24, "over")
That's lower than I expected at English Wikipedia. It only had about 1,200 users with at least 30 notifications per month, compared to 3,500 highly active users (100+ edits) per month. However, both Flow wikis have higher percentages than the non-Flow wikis. Now, let's look at the actual distributions. To make it easie...
beyond_threshold(notifs, wikis, 99, "over")
Graphs
fig, axarr = plt.subplots( 5, 1, figsize=(12,30) ) fig.suptitle("Total notifications per user", fontsize=24) fig.subplots_adjust(top=0.95) i = 0 for wiki in wikis: plot_by_wiki(notifs, wiki, ax = axarr[i]) i = i + 1
So, as expected, all the wikis have a pretty regular power-law distribution of notifications. Unread notifications First, the counts and percentages for various levels of unread notifications. Under 5
beyond_threshold(unreads, wikis, 5, "under")
5 or more
beyond_threshold(unreads, wikis, 4, "over")
25 or more
beyond_threshold(unreads, wikis, 24, "over")
100 or more
beyond_threshold(unreads, wikis, 99, "over")
Histograms
fig, axarr = plt.subplots( 5, 1, figsize=(12,30) ) fig.suptitle("Unread notifications per user", fontsize=24) fig.subplots_adjust(top=0.95) i = 0 for wiki in wikis: plot_by_wiki(unreads, wiki, ax = axarr[i]) i = i + 1
Preparing the Dataset Load dataset from tab-separated text file Dataset contains three columns: feature 1, feature 2, and class labels Dataset contains 100 entries sorted by class labels, 50 examples from each class
data = np.genfromtxt('perceptron_toydata.txt', delimiter='\t') X, y = data[:, :2], data[:, 2] y = y.astype(int)  # np.int is removed in modern NumPy print('Class label counts:', np.bincount(y)) plt.scatter(X[y==0, 0], X[y==0, 1], label='class 0', marker='o') plt.scatter(X[y==1, 0], X[y==1, 1], label='class 1', marker='s') plt.xlabel('feature 1') plt....
machinelearning/deep-learning-book/code/ch02_perceptron/ch02_perceptron.ipynb
othersite/document
apache-2.0
Shuffle dataset Split dataset into 70% training and 30% test data Seed random number generator for reproducibility
shuffle_idx = np.arange(y.shape[0]) shuffle_rng = np.random.RandomState(123) shuffle_rng.shuffle(shuffle_idx) X, y = X[shuffle_idx], y[shuffle_idx]  # X and y are now shuffled, so slice them directly X_train, X_test = X[:70], X[70:] y_train, y_test = y[:70], y[70:]
Standardize training and test datasets (mean zero, unit variance)
mu, sigma = X_train.mean(axis=0), X_train.std(axis=0) X_train = (X_train - mu) / sigma X_test = (X_test - mu) / sigma
Check dataset (here: training dataset) after preprocessing steps
plt.scatter(X_train[y_train==0, 0], X_train[y_train==0, 1], label='class 0', marker='o') plt.scatter(X_train[y_train==1, 0], X_train[y_train==1, 1], label='class 1', marker='s') plt.xlabel('feature 1') plt.ylabel('feature 2') plt.legend() plt.show()
Implementing a Perceptron in NumPy Implement function for perceptron training in NumPy
def perceptron_train(features, targets, mparams=None, zero_weights=True, learning_rate=1., seed=None): """Perceptron training function for binary class labels Parameters ---------- features : numpy.ndarray, shape=(n_samples, m_features) A 2D NumPy array containing the train...
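The training function above is shown truncated. The classic perceptron update rule it implements can be sketched in a minimal, self-contained NumPy version (a sketch with illustrative names, not the notebook's exact implementation or signature):

```python
import numpy as np

def train_perceptron(X, y, epochs=10, learning_rate=1.0):
    """Train a binary perceptron; y must contain 0/1 labels."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = int(xi @ w + b >= 0.0)
            error = target - pred          # in {-1, 0, 1}
            w += learning_rate * error * xi  # update only on misclassification
            b += learning_rate * error
    return {"weights": w, "bias": b}

def predict_perceptron(X, params):
    return (X @ params["weights"] + params["bias"] >= 0.0).astype(int)

# Tiny linearly separable toy set
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1, 1, 0, 0])
params = train_perceptron(X, y)
print((predict_perceptron(X, params) != y).sum())  # 0 training errors
```

On linearly separable data the perceptron converges to zero training errors in a finite number of updates; on non-separable data it never stops oscillating, which is why an epoch limit is needed.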
Train the perceptron for 2 epochs
model_params = perceptron_train(X_train, y_train, mparams=None, zero_weights=True) for _ in range(2): _ = perceptron_train(X_train, y_train, mparams=model_params)
Implement a function for perceptron predictions in NumPy
def perceptron_predict(features, mparams): """Perceptron prediction function for binary class labels Parameters ---------- features : numpy.ndarray, shape=(n_samples, m_features) A 2D NumPy array containing the training examples mparams : dict The model parameters aof the perceptro...
Compute training and test error
train_errors = np.sum(perceptron_predict(X_train, model_params) != y_train) test_errors = np.sum(perceptron_predict(X_test, model_params) != y_test) print('Number of training errors', train_errors) print('Number of test errors', test_errors)
Visualize the decision boundary The perceptron is a linear function with threshold $$w_{1}x_{1} + w_{2}x_{2} + b \geq 0.$$ We can rearrange this inequality as follows (dividing by $-w_2$ flips the inequality sign, assuming $w_2 > 0$): $$w_{1}x_{1} + b \geq -w_{2}x_{2}$$ $$- \frac{w_{1}x_{1}}{w_2} - \frac{b}{w_2} \leq x_{2}$$
x_min = -2 y_min = ( -(model_params['weights'][0] * x_min) / model_params['weights'][1] -(model_params['bias'] / model_params['weights'][1]) ) x_max = 2 y_max = ( -(model_params['weights'][0] * x_max) / model_params['weights'][1] -(model_params['bias'] / model_params['weights'][1]) ) fig, ax = pl...
Suggested exercises Train a zero-weight perceptron with different learning rates and compare the model parameters and decision boundaries to each other. What do you observe? Repeat the previous exercise with randomly initialized weights.
# %load solutions/01_weight_zero_learning_rate.py # %load solutions/02_random_weights_learning_rate.py
Implementing a Perceptron in TensorFlow Setting up the perceptron graph
g = tf.Graph() n_features = X_train.shape[1] with g.as_default() as g: # initialize model parameters features = tf.placeholder(dtype=tf.float32, shape=[None, n_features], name='features') targets = tf.placeholder(dtype=tf.float32, shape=[N...
Training the perceptron for 5 training samples for illustration purposes
with tf.Session(graph=g) as sess: sess.run(tf.global_variables_initializer()) i = 0 for example, target in zip(X_train, y_train): feed_dict = {features: example.reshape(-1, n_features), targets: target.reshape(-1, 1)} _, _ = sess.run([weight_update, bias_update...
Continue training of the graph after restoring the session from a local checkpoint (this can be useful if we have to interrupt our computational session). Now train a complete epoch.
with tf.Session(graph=g) as sess: saver.restore(sess, os.path.abspath('perceptron')) for epoch in range(1): for example, target in zip(X_train, y_train): feed_dict = {features: example.reshape(-1, n_features), targets: target.reshape(-1, 1)} _, _ = sess....
Suggested Exercises 3) Plot the decision boundary for this TensorFlow perceptron. Why do you think the TensorFlow implementation performs better than our NumPy implementation on the test set? - Hint 1: you can re-use the code that we used in the NumPy section - Hint 2: since the bias is a 2D array, you need to acces...
# %load solutions/03_tensorflow-boundary.py
Theoretically, we could restart the Jupyter notebook now (though we would then have to prepare the dataset again). We are going to restore the session from a meta graph (notice "tf.Session()"). First, we have to load the datasets again.
with tf.Session() as sess: saver = tf.train.import_meta_graph(os.path.abspath('perceptron.meta')) saver.restore(sess, os.path.abspath('perceptron')) pred = sess.run('prediction:0', feed_dict={'features:0': X_train}) train_errors = np.sum(pred.reshape(-1) != y_train) pred = sess.run('predic...
Note that you also have access to a quicker shortcut for adding weights to a layer: the add_weight() method:
# TODO # Use `add_weight()` method for adding weight to a layer class Linear(keras.layers.Layer): def __init__(self, units=32, input_dim=32): super(Linear, self).__init__() self.w = self.add_weight( shape=(input_dim, units), initializer="random_normal", trainable=True ) s...
courses/machine_learning/deepdive2/introduction_to_tensorflow/labs/custom_layers_and_models.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
In many cases, you may not know in advance the size of your inputs, and you would like to lazily create weights when that value becomes known, some time after instantiating the layer. In the Keras API, we recommend creating layer weights in the build(self, input_shape) method of your layer. Like this:
# TODO class Linear(keras.layers.Layer): def __init__(self, units=32): super(Linear, self).__init__() self.units = units def build(self, input_shape): self.w = self.add_weight( shape=# TODO: Your code goes here, initializer="random_normal", trainable=...
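The lazy-weight pattern that Keras implements with build() can be illustrated framework-free: defer weight creation until the first call, when the input shape becomes known. `LazyLinear` below is a hypothetical name for this sketch, not a Keras class:

```python
import numpy as np

class LazyLinear:
    """Dense layer that creates its weights on first call (Keras-style build)."""

    def __init__(self, units):
        self.units = units
        self.w = None  # created lazily, once the input dimension is known
        self.b = None

    def build(self, input_dim):
        rng = np.random.default_rng(0)
        self.w = rng.normal(0, 0.05, size=(input_dim, self.units))
        self.b = np.zeros(self.units)

    def __call__(self, inputs):
        if self.w is None:  # build on first use, like Keras' build()
            self.build(inputs.shape[-1])
        return inputs @ self.w + self.b

layer = LazyLinear(4)
out = layer(np.ones((2, 3)))  # weights created here, shaped (3, 4)
print(out.shape)              # (2, 4)
```

The construction-time signature stays independent of the input size; only the first forward pass fixes the weight shapes, which is exactly what build(self, input_shape) buys you in Keras.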
Layers are recursively composable If you assign a Layer instance as an attribute of another Layer, the outer layer will start tracking the weights of the inner layer. We recommend creating such sublayers in the __init__() method (since the sublayers will typically have a build method, they will be built when the outer ...
# TODO # Let's assume we are reusing the Linear class # with a `build` method that we defined above. class MLPBlock(keras.layers.Layer): def __init__(self): super(MLPBlock, self).__init__() self.linear_1 = Linear(32) self.linear_2 = Linear(32) self.linear_3 = Linear(1) def cal...
These losses (including those created by any inner layer) can be retrieved via layer.losses. This property is reset at the start of every __call__() to the top-level layer, so that layer.losses always contains the loss values created during the last forward pass.
# TODO class OuterLayer(keras.layers.Layer): def __init__(self): super(OuterLayer, self).__init__() self.activity_reg = # TODO: Your code goes here def call(self, inputs): return self.activity_reg(inputs) layer = OuterLayer() assert len(layer.losses) == 0 # No losses yet since the la...
The add_metric() method Similarly to add_loss(), layers also have an add_metric() method for tracking the moving average of a quantity during training. Consider the following layer: a "logistic endpoint" layer. It takes as inputs predictions & targets, it computes a loss which it tracks via add_loss(), and it computes ...
# TODO class LogisticEndpoint(keras.layers.Layer): def __init__(self, name=None): super(LogisticEndpoint, self).__init__(name=name) self.loss_fn = keras.losses.BinaryCrossentropy(from_logits=True) self.accuracy_fn = keras.metrics.BinaryAccuracy() def call(self, targets, logits, sample_w...
You can optionally enable serialization on your layers If you need your custom layers to be serializable as part of a Functional model, you can optionally implement a get_config() method:
# TODO class Linear(keras.layers.Layer): def __init__(self, units=32): super(Linear, self).__init__() self.units = units def build(self, input_shape): self.w = self.add_weight( shape=(input_shape[-1], self.units), initializer="random_normal", trainabl...
The paragraph below was lifted from #2 and describes how the humanized gene(s) were made from a series of overlapping oligonucleotides. Two primers are described that were used to amplify the synthetic gene in order to clone it into the pBS vector. We can assume that the GFP sequence is identical to #3. The final product...
from pydna.genbank import Genbank from pydna.parsers import parse_primers from pydna.amplify import pcr from pydna.readers import read from pydna.gel import Gel
notebooks/pGreenLantern1/pGreenLantern1.ipynb
BjornFJohansson/pydna-examples
bsd-3-clause
We get the GFP gene from Genbank according to #2
gb = Genbank("bjornjobb@gmail.com") humanized_gfp_gene = gb.nucleotide('U50963')
The upstream and downstream primers were described in #2
up, dp = parse_primers(''' >upstream_primer TGCTCTAGAGCGGCCGCCGCCACCATGAGCAAGGGCGAGGAACTG >downstream_primer CGGAAGCTTGCGGCCGCTCACTTGTACAGCTCGTCCAT''') humanized_gfp_product = pcr(up, dp, humanized_gfp_gene) humanized_gfp_product humanized_gfp_product.figure()
The PCR product contains the entire GFP coding sequence, as expected. The PCR product was digested with XbaI and HindIII. We import the restriction enzymes from Biopython:
from Bio.Restriction import XbaI, HindIII stuffer, gene_fragment, stuffer = humanized_gfp_product.cut(XbaI, HindIII) gene_fragment
The gene fragment has the expected size and sticky ends:
gene_fragment.seq
The pBS(+) plasmid is also known as [BlueScribe](https://www.snapgene.com/resources/plasmid_files/basic_cloning_vectors/pBS\(+\)), which is available from Genbank under accession L08783.
pBSplus = gb.nucleotide("L08783") stuffer, pBS_lin = pBSplus.cut(XbaI, HindIII) stuffer, pBS_lin
A small 28 bp stuffer fragment is lost upon digestion. The linearized vector and the gene fragment are then ligated and circularized:
pBS_GFPH1 = (pBS_lin+gene_fragment).looped()
The pBS_GFPH1 plasmid is 3926 bp long
pBS_GFPH1
The second paragraph in the Materials section in #2 is harder to follow. The first part describes the construction of a vector with wild type GFP which is substituted for the humanized GFP in the end. Plasmids referenced are: 1. TU#65 2. pCMVb 3. pRc/CMV 4. pTRBR The TU#65 plasmid is described in: Chalfie, M., Y. Tu, G...
import requests from lxml import html r = requests.get('https://www.addgene.org/13744/sequences/') tree = html.fromstring(r.text) rawdata_addgene_full_sequence = tree.xpath(".//*[@id='depositor-full']") pGL_MLKif3B = read( rawdata_addgene_full_sequence[0].text_content() ).looped() rawdata_addgene_partial_sequence = tre...
The sequence seems to have the correct size:
pGL_MLKif3B
We cut out the NotI fragment
from Bio.Restriction import NotI Kif3b_GFP, pGL_backbone = pGL_MLKif3B.cut(NotI) Kif3b_GFP, pGL_backbone
The remaining backbone sequence is 4304 bp. We cut out the NotI GFP cassette from pBS_GFPH1
humanized_gfp_NotI_frag, pBS_bb = pBS_GFPH1.cut(NotI) humanized_gfp_NotI_frag, pBS_bb
Then we combine the backbone from pGL_MLKif3B (4304) with the insert from pBS_GFPH1 (736)
pGreenLantern1 = (pGL_backbone + humanized_gfp_NotI_frag).looped()
The sequence seems to have roughly the correct size (5 kb):
pGreenLantern1 pGreenLantern1.locus="pGreenLantern1"
The candidate for the pGreenLantern1 sequence can be downloaded from the link below.
pGreenLantern1.write("pGreenLantern1.gb")
The plasmid map below is from the patent (#4)
from IPython.display import Image Image("https://patentimages.storage.googleapis.com/US6638732B1/US06638732-20031028-D00005.png", width=300)
The map below was made with PlasMapper and corresponds roughly to the map from the patent above.
Image("plasMap203_1479761881670.png", width=600)
Check sequence by restriction digest If the actual plasmid is available, the sequence assembled here can be compared by restriction analysis to the actual vector. NdeI cuts the sequence three times, producing fragments that are easy to distinguish on a gel.
from Bio.Restriction import NdeI fragments = pGreenLantern1.cut(NdeI) fragments from pydna.gel import weight_standard_sample #PYTEST_VALIDATE_IGNORE_OUTPUT %matplotlib inline gel=Gel([weight_standard_sample('1kb+_GeneRuler'), fragments]) gel.run() Image("http://static.wixstatic.com/media/5be0cc_b35636c46e654d8b8c0...
Next, we download a data file containing spike train data from multiple trials of two neurons.
# Download data !wget -Nq https://github.com/INM-6/elephant-tutorial-data/raw/master/dataset-1/dataset-1.h5
doc/tutorials/unitary_event_analysis.ipynb
apdavison/elephant
bsd-3-clause
Write a plotting function
def plot_UE(data,Js_dict,Js_sig,binsize,winsize,winstep, pat,N,t_winpos,**kwargs): """ Examples: --------- dict_args = {'events':{'SO':[100*pq.ms]}, 'save_fig': True, 'path_filename_format':'UE1.pdf', 'showfig':True, 'suptitle':True, 'figsize':(12,10), 'unit_ids':[10, 19, 20...
Calculate Unitary Events
UE = ue.jointJ_window_analysis( spiketrains, binsize=5*pq.ms, winsize=100*pq.ms, winstep=10*pq.ms, pattern_hash=[3]) plot_UE( spiketrains, UE, ue.jointJ(0.05),binsize=5*pq.ms,winsize=100*pq.ms,winstep=10*pq.ms, pat=ue.inverse_hash_from_pattern([3], N=2), N=2, t_winpos=ue._winpos(0*pq.ms,spi...
To use the Riot API, one more important thing to do is to get your own API key. The API key can be obtained from here. Note that the normal developer API key has a narrow request limit, whereas the production API key for commercial use has a looser limit. For now, we are just going to use the normal API key for ...
config = { 'key': 'API_key', }
.ipynb_checkpoints/LeagueRank_notebook-checkpoint.ipynb
DavidCorn/LeagueRank
apache-2.0
<a name="architecture"></a>Project Architecture <a name="crawl"></a>Data Crawling The architecture of the data crawler is shown as follows: The process of crawling data can be simplified as follows: 1) Get the summoners list from the LOL server; 2) For each summoner, get his/her top 3 most frequently played champions; 3) Fetch each...
class RiotCrawler:
    def __init__(self, key):
        self.key = key
        self.w = RiotWatcher(key)
        self.tiers = {
            'bronze': [],
            'silver': [],
            'gold': [],
            'platinum': [],
            'diamond': [],
            'challenger': [],
            'master': [],
            ...
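The three crawl steps listed above can be sketched as a nested loop. This is only an outline with stand-in functions (`get_summoners`, `get_top_champions`, and `fetch_stats` are placeholders for illustration, not actual RiotWatcher calls):

```python
# Placeholder data-access functions; the real crawler calls the Riot API
# through RiotWatcher instead of returning canned values.
def get_summoners(tier):
    return {'gold': [101, 102]}.get(tier, [])

def get_top_champions(summoner_id, n=3):
    return [summoner_id * 10 + i for i in range(n)]

def fetch_stats(summoner_id, champion_id):
    return {'summoner': summoner_id, 'champion': champion_id}

# Steps 1-3: summoners -> top 3 champions each -> one stats row per pair.
rows = [fetch_stats(s, c)
        for s in get_summoners('gold')
        for c in get_top_champions(s)]
print(len(rows))  # -> 6
```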
.ipynb_checkpoints/LeagueRank_notebook-checkpoint.ipynb
DavidCorn/LeagueRank
apache-2.0
get_tier will return a division dictionary, whose keys are the tier names and whose values are the summoner id lists in each tier. The results are printed in a human-readable format, categorized by tier.
def get_tier():
    # challenger: 77759242
    # platinum: 53381
    # gold: 70359816
    # silver: 65213225
    # bronze: 22309680
    # master: 22551130
    # diamond: 34570626
    player_ids = [70359816, 77759242, 53381, 65213225, 22309680, 22551130, 34570626]
    riot_crawler = RiotCrawler(config['key'])
    for pl...
.ipynb_checkpoints/LeagueRank_notebook-checkpoint.ipynb
DavidCorn/LeagueRank
apache-2.0
<a name="mfpChampions"></a>2. Fetch most frequently played champions Since we already have a dictionary mapping all user ids to all rank categories, we can now use those user ids to get the stats data of their most frequently used champions. We will use the raw RESTful APIs of Riot with Python here. And here are ...
import csv
import json
import os
import urllib2
.ipynb_checkpoints/LeagueRank_notebook-checkpoint.ipynb
DavidCorn/LeagueRank
apache-2.0
Then we can move on and fetch the data we need. Riot gives us an API to get all champions that a user has played during the season, and the response is in JSON format. After parsing the JSON response, we need to get the most frequently used champions, which can represent a player's level. So we sort th...
class TopChampion:
    FIELD_NAMES = ['totalSessionsPlayed', 'totalSessionsLost', 'totalSessionsWon',
                   'totalChampionKills', 'totalDamageDealt', 'totalDamageTaken',
                   'mostChampionKillsPerSession', 'totalMinionKills',
                   'totalDoubleKills', 'totalTripleKills', 'totalQ...
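The sorting step described above can be illustrated on a toy parsed response. The nesting and field names here are assumptions for illustration; Riot's actual payload may differ.

```python
# Toy champion stats, as if parsed from the JSON response.
champions = [
    {'id': 1, 'stats': {'totalSessionsPlayed': 12}},
    {'id': 2, 'stats': {'totalSessionsPlayed': 40}},
    {'id': 3, 'stats': {'totalSessionsPlayed': 25}},
    {'id': 4, 'stats': {'totalSessionsPlayed': 7}},
]

# Sort by games played, descending, and keep the 3 most frequent champions.
top3 = sorted(champions,
              key=lambda c: c['stats']['totalSessionsPlayed'],
              reverse=True)[:3]
print([c['id'] for c in top3])  # -> [2, 3, 1]
```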
.ipynb_checkpoints/LeagueRank_notebook-checkpoint.ipynb
DavidCorn/LeagueRank
apache-2.0
With the above class, we can now crawl the stats data of all champions and save them to csv files with the following code. Notice that this process is pretty slow, since we added sleep calls to our code: Riot APIs limit the API call rate, and you cannot send more than 500 requests per 10 minutes. ...
def main():
    import time
    tiers = get_tier()
    for tier, rank_dict in tiers.iteritems():
        print 'starting tier: {}'.format(tier)
        for summoner_id in rank_dict:
            print 'tier: {}, summoner id: {}'.format(tier, summoner_id)
            top_champion = TopChampion(config['key'], summoner_id,...
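The 500-requests-per-10-minutes limit works out to roughly one request every 1.2 seconds, so spacing calls out with `time.sleep` is enough to stay under it. A minimal throttling sketch follows; the helper below is an illustration, not part of the original crawler.

```python
import time

# 500 requests per 10 minutes -> one request every 1.2 seconds.
REQUEST_INTERVAL = 10 * 60 / 500.0

def throttled(items, interval=REQUEST_INTERVAL):
    """Yield items, sleeping between consecutive ones to respect a rate limit."""
    for i, item in enumerate(items):
        if i:
            time.sleep(interval)
        yield item

# Quick demo with a tiny interval so it finishes fast.
ids = list(throttled([101, 102, 103], interval=0.01))
print(ids)  # -> [101, 102, 103]
```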
.ipynb_checkpoints/LeagueRank_notebook-checkpoint.ipynb
DavidCorn/LeagueRank
apache-2.0
Vertex SDK: Custom training tabular regression model for batch prediction with explainability <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_custom_tabular_regression_batch_explain.ipynb"> <img src="h...
import os

# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
    USER_FLAG = "--user"
else:
    USER_FLAG = ""

! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
notebooks/community/sdk/sdk_custom_tabular_regression_batch_explain.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Wait for completion of batch prediction job Next, wait for the batch job to complete. Alternatively, one can set the parameter sync to True in the batch_predict() method to block until the batch prediction job is completed.
if not os.environ["IS_TESTING"]:
    batch_predict_job.wait()
notebooks/community/sdk/sdk_custom_tabular_regression_batch_explain.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Get the explanations Next, get the explanation results from the completed batch prediction job. The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. Each file cont...
if not os.environ["IS_TESTING"]:
    import tensorflow as tf

    bp_iter_outputs = batch_predict_job.iter_outputs()

    explanation_results = list()
    for blob in bp_iter_outputs:
        if blob.name.split("/")[-1].startswith("explanation"):
            explanation_results.append(blob.name)

    tags = list()
    ...
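The filename filtering in that cell can be shown in isolation with plain strings instead of Cloud Storage blobs. The blob names below are invented examples assumed to follow the batch prediction output naming pattern.

```python
# Invented example blob names (assumed naming pattern, not real job output).
blob_names = [
    "prediction-job/explanation.results-00000-of-00002",
    "prediction-job/explanation.results-00001-of-00002",
    "prediction-job/prediction.errors_stats-00000-of-00001",
]

# Keep only files whose final path component starts with "explanation".
explanation_results = [n for n in blob_names
                       if n.split("/")[-1].startswith("explanation")]
print(len(explanation_results))  # -> 2
```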
notebooks/community/sdk/sdk_custom_tabular_regression_batch_explain.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Look at Pandas DataFrames *this is italicized*
df = pd.read_csv("../data/coal_prod_cleaned.csv")
df.head()
df.shape
# import qgrid  # Put imports at the top
# qgrid.nbinstall(overwrite=True)
# qgrid.show_grid(df[['MSHA_ID',
#                     'Year',
#                     'Mine_Name',
#                     'Mine_State',
#                     'Mine_County']], ...
notebooks/11-older-stuff.ipynb
jbwhit/jupyter-best-practices
mit
Pivot Tables w/ pandas http://nicolas.kruchten.com/content/2015/09/jupyter_pivottablejs/
!conda install pivottablejs -y
df = pd.read_csv("../data/mps.csv", encoding="ISO-8859-1")
df.head(10)
notebooks/11-older-stuff.ipynb
jbwhit/jupyter-best-practices
mit
Tab
import numpy as np
np.random.
notebooks/11-older-stuff.ipynb
jbwhit/jupyter-best-practices
mit
shift-tab
np.linspace(start=, )
notebooks/11-older-stuff.ipynb
jbwhit/jupyter-best-practices
mit
shift-tab-tab (equivalent in Lab to shift-tab)
np.linspace(50, 150, num=100,)
notebooks/11-older-stuff.ipynb
jbwhit/jupyter-best-practices
mit
shift-tab-tab-tab-tab (doesn't work in lab)
np.linspace(start=, )
notebooks/11-older-stuff.ipynb
jbwhit/jupyter-best-practices
mit