SALib analysis tools

SALib is a Python-based library for performing sensitivity analysis. So far, it contains the following analysis tools:
- FAST - Fourier Amplitude Sensitivity Test
- RBD-FAST - Random Balance Designs Fourier Amplitude Sensitivity Test
- Method of Morris
- Sobol Sensitivity Analysis
- Delta Moment-Independent Measure
- Derivative-based Global Sensitivity Measure (DGSM)
- Fractional Factorial

RBD-FAST

Reminder: why do we want to use RBD-FAST?
- Not only does it robustly calculate sensitivity indices about as accurately as a Sobol method with far fewer samples,
- it is also based on Latin hypercube (LH) sampling: the output itself is exploitable and interesting to look at (unlike the Morris method, where the samples used for the calculation of the indices are useless afterwards).
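The cells below assume that `problem`, `all_samples`, and `ishigami_results` were created earlier in the notebook; that setup is not part of this excerpt. A minimal NumPy-only sketch of such a setup follows. Note two assumptions: the notebook's actual problem has six input variables, while this sketch uses the classic three-parameter Ishigami test function for brevity, and it uses a hand-rolled LH sampler instead of SALib's own sampling module.

```python
import numpy as np

def latin_hypercube(num_samples, bounds, seed=42):
    """Simple Latin hypercube sampler: one draw per equal-width stratum, per dimension."""
    rng = np.random.RandomState(seed)
    num_vars = len(bounds)
    samples = np.empty((num_samples, num_vars))
    for d, (lo, hi) in enumerate(bounds):
        # one uniform draw inside each of num_samples equal-width strata in [0, 1)
        strata = (np.arange(num_samples) + rng.uniform(size=num_samples)) / num_samples
        rng.shuffle(strata)
        samples[:, d] = lo + strata * (hi - lo)
    return samples

def ishigami(X, a=7.0, b=0.1):
    """Ishigami test function, a standard sensitivity-analysis benchmark."""
    return (np.sin(X[:, 0]) + a * np.sin(X[:, 1]) ** 2
            + b * X[:, 2] ** 4 * np.sin(X[:, 0]))

problem = {'num_vars': 3,
           'names': ['x1', 'x2', 'x3'],
           'bounds': [[-np.pi, np.pi]] * 3}

num_samples = 300
all_samples = latin_hypercube(num_samples, problem['bounds'])
ishigami_results = ishigami(all_samples)
```

Adapt `num_vars`, `names`, and `bounds` to your own model before reusing this.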
from SALib.analyze import rbd_fast

# Let us look at the analyze method
rbd_fast.analyze(problem=problem, Y=ishigami_results, X=all_samples)
misc/Sensitivity_analysis.ipynb
locie/locie_notebook
lgpl-3.0
It is a dictionary with a single key 'S1', corresponding to a list of 6 items. <br> --> First-order indices of all 6 input variables
# storing the first-order indices of the analyze method
si1 = rbd_fast.analyze(problem=problem, Y=ishigami_results, X=all_samples)['S1']

# make nice plots with the indices (looks good on your presentations)
# do not use the plotting tools of SALib, they are made for the method of Morris ...
fig, ax = plt.subplots()
fig.set_size_inches(18, 5)
ax.tick_params(labelsize=18)
# ===== X-AXIS =====
ax.set_xticks(np.arange(problem['num_vars']))
ax.set_xticklabels(problem['names'])
ax.set_xlim(xmin=-0.5, xmax=problem['num_vars'] - 0.5)
# ===== Y-AXIS =====
ax.set_ylabel('RBD-FAST\nSensitivity indices\n', fontsize=25)
# ===== BARS REPRESENTING THE SENSITIVITY INDICES =====
ax.bar(np.arange(problem['num_vars']), si1, color='DarkSeaGreen')
# in striped grey: not significant indices
ax.fill_between(x=[-0.5, 5.5], y1=-0.1, y2=0.1, color='grey', alpha=0.2,
                hatch='//', edgecolor='white')

# take a closer look to understand interactions (looks even better on your presentations)
# this part can be done without analyzing with SALib, just with the output from the samples
fig, ax = plt.subplots()
fig.set_size_inches(8, 7)
ax.tick_params(labelsize=15)
ax.set_title('Output of the model studied (Ishigami)\n'
             'according to the value of the 2 most influential parameters', fontsize=20)
# ===== SCATTER =====
size = np.ones(num_samples) * 75
sc = ax.scatter(x1, x2, c=ishigami_results, s=size,
                vmin=ishigami_results.min(), vmax=ishigami_results.max(),
                cmap='seismic', edgecolor=None)
# ===== X-AXIS =====
ax.set_xlim(xmin=problem['bounds'][0][0], xmax=problem['bounds'][0][1])
ax.set_xlabel('Parameter 1', fontsize=20)
# ===== Y-AXIS =====
ax.set_ylim(ymin=problem['bounds'][1][0], ymax=problem['bounds'][1][1])
ax.set_ylabel('Parameter 2', fontsize=20)
# ===== COLORBAR =====
ticks = np.arange(ishigami_results.min(), ishigami_results.max(), 5)
cb = plt.colorbar(sc, ticks=ticks)
cb.ax.set_yticklabels([str(round(i, 1)) for i in ticks])
fig.tight_layout()
Even without running the sensitivity analysis with the analyze module, this graph shows some strong non-linearities (blue points very close to red ones). This tells us that we should study this model with more samples; 150 is not enough.

Surprise 5th module

Basic convergence check: NOT in SALib BUT definitely mandatory

Principle

Perform the SA on sub-samples of 50, 60, 70, ... up to the total of 300. We should see the indices stabilize around a value. If not, we need more samples!
def conv_study(n, Y, X):
    # take n samples among the num_samples, without replacement
    subset = np.random.choice(num_samples, size=n, replace=False)
    return rbd_fast.analyze(problem=problem, Y=Y[subset], X=X[subset])['S1']

# sub-samples of 50, 60, 70, ... up to num_samples (step 10, matching the axis labels below)
all_indices = np.array([conv_study(n=n, Y=ishigami_results, X=all_samples)
                        for n in np.arange(50, num_samples + 1, 10)])

# convergence check
fig, ax = plt.subplots()
fig.set_size_inches(15, 8)
ax.set_title('Convergence of the sensitivity indices', fontsize=20)
ax.tick_params(labelsize=16)
ax.set_xlim(xmin=0, xmax=(num_samples - 50) // 10)
ax.set_xticks(np.arange(0, (num_samples - 50) // 10 + 1, 5))
ax.set_xticklabels([str(i) for i in range(50, num_samples + 1, 50)])
ax.set_ylim(ymin=-0.15, ymax=1)
for p, param in enumerate(problem['names']):
    ax.plot(all_indices[:, p], linewidth=3, label=param)
ax.fill_between(x=[0, (num_samples - 50) // 10], y1=-0.15, y2=0.1, color='grey',
                alpha=0.2, hatch='//', edgecolor='white', label='REMINDER : not significant')
ax.legend(fontsize=20, ncol=4)
Last but not least, the BOOTSTRAP principle: resample the num_samples points with replacement (so some samples appear several times and others not at all) and perform the sensitivity analysis on that resample. Repeat that operation 1000 times. The indices will vary: the more they vary, the larger the influence of the individual samples (i.e. some of the samples greatly influence the outcome of the SA). The values taken by the indices are trustworthy if the indices remain robust throughout the bootstrap process (i.e. the "confidence intervals" are not too wide). In any case, it is valuable information about the uncertainty around the indices.
def bootstrap(problem, Y, X):
    """Calculate confidence intervals of RBD-FAST indices.

    1000 draws; returns ~95% confidence intervals of the 1000 indices.

    problem : dictionary as SALib uses it
    X : SA input(s)
    Y : SA output(s)
    """
    all_indices = []
    for i in range(1000):
        X_new = np.zeros(X.shape)
        Y_new = np.zeros(Y.shape).flatten()
        # draw with replacement
        tirage_indices = np.random.randint(0, high=Y.shape[0], size=Y.shape[0])
        for j, index in enumerate(tirage_indices):
            X_new[j, :] = X[index, :]
            Y_new[j] = Y[index]
        all_indices.append(rbd_fast.analyze(problem=problem, Y=Y_new, X=X_new)['S1'])
    means = np.array([i.mean() for i in np.array(all_indices).T])
    stds = np.array([i.std() for i in np.array(all_indices).T])
    return np.array([means - 2 * stds, means + 2 * stds])

# Get bootstrap confidence intervals for each index
bootstrap_conf = bootstrap(problem=problem, Y=ishigami_results, X=all_samples)

# make nice plots with the indices (looks good on your presentations)
# do not use the plotting tools of SALib, they are made for the method of Morris ...
fig, ax = plt.subplots()
fig.set_size_inches(8, 4)
ax.tick_params(labelsize=18)
# ===== X-AXIS =====
ax.set_xticks(np.arange(problem['num_vars']))
ax.set_xticklabels(problem['names'])
ax.set_xlim(xmin=-0.5, xmax=problem['num_vars'] - 0.5)
# ===== Y-AXIS =====
ax.set_ylabel('RBD-FAST\nSensitivity indices\n', fontsize=25)
# ===== BARS REPRESENTING THE SENSITIVITY INDICES =====
ax.bar(np.arange(problem['num_vars']), si1, color='DarkSeaGreen')
# ===== LINES REPRESENTING THE BOOTSTRAP "CONFIDENCE INTERVALS" =====
for j in range(problem['num_vars']):
    ax.plot([j, j], [bootstrap_conf[0, j], bootstrap_conf[1, j]], 'k')
#ax.fill_between(x=[-0.5, 5.5], y1=-0.1, y2=0.1, color='grey', alpha=0.2, hatch='//', edgecolor='white')
2.3. Fourier Series<a id='math:sec:fourier_series'></a>

While Fourier series are not immediately required for the calculus in this book, they are closely connected to the Fourier transform, which is an essential tool. Moreover, we noticed a few times that the principle of harmonic analysis or harmonic decomposition is essential and, despite its simplicity, often not well understood. We hence give a very brief summary, without addressing questions of existence.

2.3.1 Definition <a id='math:sec:fourier_series_definition'></a>

The Fourier series of a function $f: \mathbb{R} \rightarrow \mathbb{R}$ with real coefficients is defined as

<a id='math:eq:3_001'></a><!--\label{math:eq:3_001}-->$$ f_{\rm F}(x) \,=\, \frac{1}{2}c_0+\sum_{m = 1}^{\infty}c_m \,\cos(mx)+\sum_{m = 1}^{\infty}s_m \,\sin(mx), $$

with the Fourier coefficients $c_m$ and $s_m$

<a id='math:eq:3_002'></a><!--\label{math:eq:3_002}-->$$ \begin{split} c_m \,&=\,\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\,\cos(mx)\,dx \qquad m \in \mathbb{N}_0\\ s_m \,&=\,\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\,\sin(mx)\,dx \qquad m \in \mathbb{N}_0, \end{split} $$

so that in particular $c_0 \,=\,\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\,dx$.

If $f_{\rm F}$ exists, it is identical to $f$ at all points of continuity. For functions which are periodic with a period of $2\pi$ the Fourier series converges; hence, for a continuous periodic function with a period of $2\pi$, the Fourier series converges and $f_{\rm F}=f$.

The Fourier series of a function $f: \mathbb{R} \rightarrow \mathbb{R}$ with complex coefficients is defined as

<a id='math:eq:3_003'></a><!--\label{math:eq:3_003}-->$$ f_{\rm IF}(x) \,=\, \sum_{m = -\infty}^{\infty}a_m \,e^{\imath mx}, $$

with the Fourier coefficients $a_m$

<a id='math:eq:3_004'></a><!--\label{math:eq:3_004}-->$$ a_m \,=\, \frac{1}{2\pi}\int_{-\pi}^{\pi}f(x)e^{-\imath mx}\,dx\qquad \forall m\in\mathbb{Z}.$$

The same convergence criteria apply, and one representation can be transformed into the other.
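As a quick numerical sanity check of these coefficient formulas (a small sketch, not part of the original text): for $f(x)=\cos(2x)$, the only non-zero coefficient should be $c_2=1$. Approximating the integrals with the trapezoidal rule:

```python
import numpy as np

# f(x) = cos(2x) on the base interval [-pi, pi]
x = np.linspace(-np.pi, np.pi, 100001)
f = np.cos(2 * x)

def c(m):
    """Real cosine coefficient c_m, via trapezoidal integration."""
    return np.trapz(f * np.cos(m * x), x) / np.pi

def s(m):
    """Real sine coefficient s_m, via trapezoidal integration."""
    return np.trapz(f * np.sin(m * x), x) / np.pi

# c(2) is approximately 1; c(0), c(1), s(1), s(2) are approximately 0
```

The same check works for any trigonometric polynomial: each coefficient simply reads off the amplitude of the matching harmonic.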
Making use of Euler's formula &#10142; <!--\ref{math:sec:eulers_formula}-->, one gets

<a id='math:eq:3_005'></a><!--\label{math:eq:3_005}-->$$ a_m \,=\, \frac{1}{2\pi}\int_{-\pi}^{\pi}f(x)\,[\cos(mx)-\imath \,\sin(mx)]\,dx \,=\, \left\{ \begin{array}{lll} \frac{1}{2} (c_{-m}+\imath\, s_{-m}) & {\rm for} & m < 0\\ \frac{1}{2} c_0 & {\rm for} & m = 0\\ \frac{1}{2} (c_m-\imath\,s_m) & {\rm for} & m > 0 \end{array} \right. $$

and accordingly, $\forall m \in \mathbb{N}_0$,

<a id='math:eq:2_006'></a><!--\label{math:eq:2_006}-->$$ \begin{split} c_m \,&=\, a_m+a_{-m}\\ s_m \,&=\, \imath\,(a_m-a_{-m}). \end{split} $$

The concept of the Fourier series can be expanded to a base interval of one period $T$ instead of $2\pi$ by substituting $x = \frac{2\pi}{T}t$:

<a id='math:eq:3_007'></a><!--\label{math:eq:3_007}-->$$ g_{\rm F}(t) = f_{\rm F}\left(\frac{2\pi}{T}t\right) \,=\, \frac{1}{2}c_0+\sum_{m = 1}^{\infty}c_m \,\cos\left(m\frac{2\pi}{T}t\right)+\sum_{m = 1}^{\infty}s_m \,\sin\left(m\frac{2\pi}{T}t\right), $$

where

<a id='math:eq:3_008'></a><!--\label{math:eq:3_008}-->$$ \begin{split} c_0 \,&=\,\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\,dx \,=\, \frac{2}{T}\int_{-\frac{T}{2}}^{\frac{T}{2}}g(t)\,dt\\ c_m \,&=\,\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\,\cos(mx)\,dx \,=\, \frac{2}{T}\int_{-\frac{T}{2}}^{\frac{T}{2}}g(t)\,\cos\left(m\frac{2\pi}{T}t\right)\,dt \qquad m \in \mathbb{N}_0\\ s_m \,&=\,\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\,\sin(mx)\,dx \,=\, \frac{2}{T}\int_{-\frac{T}{2}}^{\frac{T}{2}}g(t)\,\sin\left(m\frac{2\pi}{T}t\right)\,dt \qquad m \in \mathbb{N}_0, \end{split} $$

or

<a id='math:eq:3_009'></a><!--\label{math:eq:3_010}-->$$ g_{\rm IF}(t) = f_{\rm IF}\left(\frac{2\pi}{T}t\right) \,=\, \sum_{m = -\infty}^{\infty}a_m \,e^{\imath m\frac{2\pi}{T}t}, $$

<a id='math:eq:3_011'></a><!--\label{math:eq:3_011}-->$$ a_m \,=\, \frac{1}{2\pi}\int_{-\pi}^{\pi}f(x)\,e^{-\imath mx}\,dx\,=\,\frac{1}{T}\int_{-\frac{T}{2}}^{\frac{T}{2}}g(t)\,e^{-\imath m\frac{2\pi}{T}t}\,dt \qquad\forall m\in\mathbb{Z}.$$

The series again converges under the same criteria as before, and the relations between the real and complex Fourier coefficients from equation <!--\ref{math:eq:3_005}--> stay valid.

One nice example is the complex Fourier series of the scaled shah function &#10142; <!--\ref{math:sec:shah_function}--> $III_{T^{-1}}(x)\,=\,III\left(\frac{x}{T}\right)\,=\,\sum_{l=-\infty}^{+\infty} T\, \delta\left(x-l T\right)$. Obviously, the period of this function is $T$.
The Fourier coefficients (matched to a period of $T$) are calculated as

<a id='math:eq:3_012'></a><!--\label{math:eq:3_012}-->$$ \begin{split} a_m \,&= \,\frac{1}{T}\int_{-\frac{T}{2}}^{\frac{T}{2}}\left(\sum_{l=-\infty}^{+\infty}T\,\delta\left(x-l T\right)\right)e^{-\imath m \frac{2\pi}{T} x}\,dx\\ &=\,\frac{1}{T} \int_{-\frac{T}{2}}^{\frac{T}{2}} T\, \delta\left(x\right)e^{-\imath m \frac{2\pi}{T} x}\,dx\\ &=\,1 \end{split} \qquad \forall m\in\mathbb{Z}.$$

It follows that

<a id='math:eq:3_013'></a><!--\label{math:eq:3_013}-->$$ III_{T^{-1}}(x)\,=\,III\left(\frac{x}{T}\right)\,=\,\sum_{m=-\infty}^{+\infty} e^{\imath m\frac{2\pi}{T} x}. $$

2.3.2 Example <a id='math:sec:fourier_series_example'></a>

We demonstrate how to decompose a signal into its Fourier series. An easy way to implement this numerically is to use the trapezoidal rule to approximate the integral. Thus we start by defining a function that computes the coefficients of the Fourier series using the complex definition <!--\ref{math:eq:3_011}-->.
def FS_coeffs(x, m, func, T=2.0*np.pi):
    """Computes Fourier series (FS) coefficients of func.

    Input:
        x    = input vector at which to evaluate func
        m    = the order of the coefficient
        func = the function to find the FS of
        T    = the period of func (defaults to 2 pi)
    """
    # Evaluate the integrand
    am_int = func(x) * np.exp(-1j * 2.0 * m * np.pi * x / T)
    # Use trapezoidal integration to get the coefficient
    am = np.trapz(am_int, x)
    return am / T
2_Mathematical_Groundwork/2_3_fourier_series.ipynb
griffinfoster/fundamentals_of_interferometry
gpl-2.0
That should be good enough for our purposes here. Next we create a function to sum the Fourier series.
def FS_sum(x, m, func, period=None):
    # If no period is specified use the entire domain
    if period is None:
        period = np.abs(x.max() - x.min())

    # Evaluate the coefficients and sum the series
    f_F = np.zeros(x.size, dtype=np.complex128)
    for i in range(-m, m + 1):
        am = FS_coeffs(x, i, func, T=period)
        f_F += am * np.exp(2.0j * np.pi * i * x / period)
    return f_F
Let's see what happens if we decompose a square wave.
# define square wave function
def square_wave(x):
    I = np.argwhere(np.abs(x) <= 0.5)
    tmp = np.zeros(x.size)
    tmp[I] = 1.0
    return tmp

# Set domain and compute square wave
N = 250
x = np.linspace(-1.0, 1.0, N)

# Compute the FS up to order m
m = 10
sw_F = FS_sum(x, m, square_wave, period=2.0)

# Plot result
plt.figure(figsize=(15, 5))
plt.plot(x, sw_F.real, 'g', label=r'$ Fourier \ series $')
plt.plot(x, square_wave(x), 'b', label=r'$ Square \ wave $')
plt.title(r"$FS \ decomp \ of \ square \ wave$", fontsize=20)
plt.xlabel(r'$x$', fontsize=18)
plt.ylim(-0.05, 1.5)
plt.legend()
# <a id='math:fig:fou_decomp'></a><!--\label{math:fig:fou_decomp}-->
Figure 2.8.1: Approximating a function with a finite number of Fourier series coefficients.

As can be seen from the figure, the Fourier series approximates the square wave, but at such a low order (i.e. $m = 10$) it does not do a very good job. In fact, an infinite number of Fourier series coefficients are required to fully capture a square wave. Below is an interactive demonstration that allows you to vary the parameters of the Fourier series decomposition. Note, in particular, what happens if we make the period too small. Also feel free to apply it to functions other than the square wave (but make sure to adjust the domain accordingly).
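The overshoot next to the jump does not disappear as the order grows (the Gibbs phenomenon): the partial sums keep peaking near 1.09, about 9% of the jump height above the plateau. A self-contained sketch illustrating this, using the analytically derived coefficients of this square wave (period 2, value 1 for $|x| \leq 0.5$, so $a_0 = 1/2$ and $a_m = \sin(\pi m/2)/(\pi m)$) rather than the `FS_sum` helper above:

```python
import numpy as np

def square_wave_partial_sum(x, order):
    """Partial Fourier sum of the unit square wave (1 for |x| <= 0.5, period 2).

    By symmetry only cosine terms survive; each term uses the analytic
    coefficient a_m = sin(pi*m/2) / (pi*m), with a_0 = 1/2.
    """
    f = np.full_like(x, 0.5)
    for m in range(1, order + 1):
        f += 2.0 * np.sin(np.pi * m / 2) / (np.pi * m) * np.cos(np.pi * m * x)
    return f

# sample finely around the discontinuity at x = 0.5
x = np.linspace(0.3, 0.7, 40001)
peak = square_wave_partial_sum(x, 200).max()
# peak stays near 1.09 even at order 200, while the true wave never exceeds 1
```

Increasing `order` narrows the overshoot region but does not reduce its height.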
def inter_FS(x, m, func, T):
    f_F = FS_sum(x, m, func, period=T)
    plt.plot(x, f_F.real, 'b')
    plt.plot(x, func(x), 'g')

interact(lambda m, T: inter_FS(x=np.linspace(-1.0, 1.0, N), m=m, func=square_wave, T=T),
         m=(5, 100, 1), T=(0, 2*np.pi, 0.5)) and None
# <a id='math:fig:fou_decomp_inter'></a><!--\label{math:fig:fou_decomp_inter}-->
From the Thorlabs website: https://www.thorlabs.com/newgrouppage9.cfm?objectgroup_id=1000 Read in the filter curve, fc
# `sheet_name`/`usecols` replace the deprecated `sheetname`/`parse_cols` arguments
fc = pd.read_excel("../data/FB1250-10.xlsx", sheet_name='Transmission Data',
                   usecols=[2, 3, 4], skipfooter=2)
fc.tail()
fc.columns
notebooks/SiGaps_20_Thorlabs_filter_curve.ipynb
Echelle/AO_bonding_paper
mit
Normalize the transmission
fc['wavelength'] = fc['Wavelength (nm)']
fc['transmission'] = fc['% Transmission'] / fc['% Transmission'].max()
Drop wavelengths shorter than 1150 nm since they are absorbed.
fc.drop(fc.index[fc.wavelength < 1150], inplace=True)
sns.set_context('notebook', font_scale=1.5)
Construct a model.
import etalon

np.random.seed(78704)

n1 = etalon.sellmeier_Si(fc.wavelength.values)
dsp = etalon.T_gap_Si_fast(fc.wavelength, 0.0, n1)

sns.set_context('paper', font_scale=1.6)
sns.set_style('ticks')

model_absolute = etalon.T_gap_Si_fast(fc.wavelength, 50.0, n1)
model = model_absolute / dsp
plt.plot(fc.wavelength, model, label='50 nm gap spectrum')

model_absolute = etalon.T_gap_Si_fast(fc.wavelength, 250.0, n1)
model = model_absolute / dsp
plt.plot(fc.wavelength, model, '--', label='250 nm gap spectrum')

plt.fill_betweenx(fc.transmission, fc.wavelength, color='k', alpha=0.3)
plt.text(1260, 0.5, 'FB1250-10', fontsize=14)
#plt.plot(fc.wavelength, fc.transmission, '--', label='Filter Curve')
plt.xlabel('$\lambda$ (nm)')
plt.ylabel('T')
plt.legend(loc='lower right')
plt.xlim(1200, 1400)
plt.savefig('../figs/F1250_10_filter.pdf')
Plot the integrated flux for a variety of gap sizes. Define an integral function.
# use column assignment (attribute assignment would not create a real DataFrame column)
fc['transmission_norm'] = fc.transmission / fc.transmission.sum()
integrate_flux = lambda x: (x * fc['transmission_norm']).sum()
Small gaps.
gap_sizes = np.arange(0, 50, 2)
gap_trans = [integrate_flux(etalon.T_gap_Si_fast(fc.wavelength, gap_size, n1) / dsp)
             for gap_size in gap_sizes]

sns.set_context('paper', font_scale=1.6)
sns.set_style('ticks')

plt.plot(gap_sizes, gap_trans, 's', label='Integrated transmission')
plt.xlabel('Gap axial extent $d$ (nm)')
plt.ylabel('FB1250-10 Transmission')
plt.hlines(1.0, 0, 50, label='100% transmission')
plt.hlines(0.998, 0, 50, linestyle='dashed', label='99.8% transmission')
plt.legend(loc='lower left')
plt.savefig('../figs/FB1250-10_integ_trans.pdf')

out_tbl = pd.DataFrame({'d (nm)': gap_sizes[::4],
                        'FB1250-10 Transmission': gap_trans[::4]})
out_tbl['FB1250-10 Transmission'] = out_tbl['FB1250-10 Transmission'].round(3)
out_tbl = out_tbl[['d (nm)', 'FB1250-10 Transmission']]
out_tbl
out_tbl.to_latex('../tbls/tbl_FB1250_raw.tex', index=False)
2. Load SST data

2.1 Load time series SST

Select the region (40°–50°N, 190°–240°E, i.e. 170°–120°W) and the period (1981–2015)
ds = xr.open_dataset('data/sst.mnmean.v5.nc')
sst = ds.sst.sel(lat=slice(50, 40), lon=slice(190, 240),
                 time=slice('1981-01-01', '2015-12-31'))
#sst.mean(dim='time').plot()
ex33-View Northeast Pacifc sea surface temperature based on an ensemble empirical mode decomposition.ipynb
royalosyin/Python-Practical-Application-on-Climate-Variability-Studies
mit
2.2 Calculate climatology between 1981-2010
sst_clm = sst.sel(time=slice('1981-01-01','2010-12-31')).groupby('time.month').mean(dim='time') #sst_clm = sst.groupby('time.month').mean(dim='time')
2.3 Calculate SSTA
sst_anom = sst.groupby('time.month') - sst_clm sst_anom_mean = sst_anom.mean(dim=('lon', 'lat'), skipna=True)
3. Carry out EMD analysis
from PyEMD import EEMD

S = sst_anom_mean.values
t = sst.time.values

# Assign EEMD to `eemd` variable
eemd = EEMD()

# Execute EEMD on S
eIMFs = eemd.eemd(S)
4. Visualize 4.1 Plot IMFs
nIMFs = eIMFs.shape[0]
plt.figure(figsize=(11, 20))
plt.subplot(nIMFs + 1, 1, 1)

# plot original data
plt.plot(t, S, 'r')

# plot IMFs
for n in range(nIMFs):
    plt.subplot(nIMFs + 1, 1, n + 2)
    plt.plot(t, eIMFs[n], 'g')
    plt.ylabel("eIMF %i" % (n + 1))
    plt.locator_params(axis='y', nbins=5)

plt.xlabel("Time")
4.2 Error of reconstruction
reconstructed = eIMFs.sum(axis=0)
plt.plot(t, reconstructed - S)
Docstrings
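The class `A` used below is defined earlier in the notebook and is not part of this excerpt. A minimal stand-in consistent with the calls that follow (a class docstring, an `x` attribute set in `__init__`, and a documented `report()` method) might look like:

```python
class A:
    """Base class that stores a single value x."""

    def __init__(self, x=0):
        # store the constructor argument as an instance attribute
        self.x = x

    def report(self):
        """Return the stored value as a string."""
        return f'x = {self.x}'
```

The exact docstrings and `report()` behavior here are assumptions for illustration; only the attribute and method names are taken from the cells below.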
A.__doc__
help(A)
A.report.__doc__
notebook/03_Classes.ipynb
cliburn/sta-663-2017
mit
Creating an instance of a class. Example of a class without a __repr__ method.
class X:
    """Empty class."""

x = X()
print(x)
Create new instances of the class A
a0 = A('a')
print(a0)

a1 = A(x=3.14)
print(a1)
Attribute access
a0.x, a1.x
Method access
a0.report(), a1.report()
Class inheritance
class B(A):
    """Derived class inherits from A."""

    def report(self):
        """Override the report() method of A."""
        return self.x

B.__doc__
Create new instances of class B
b0 = B(3 + 4j)
b1 = B(x=a1)
Attribute access
b0.x
b1.x
Method access
b1.report()
Nested attribute access
b1.x.report()
2. Write a function that given a model calculates $P(B|x_1,\dots,x_n)$
def p1(model, X):
    '''
    model: a dictionary with the model probabilities.
    X: a list with x_i values [x_1, x_2, ... , x_n]
    Returns: the probability P(B = 1 | x_1, x_2, ... , x_n)
    '''
    return 0
exam_is.ipynb
fagonzalezo/is-2016-1
mit
3. Write a function that given a model calculates $P(A|x_1,\dots,x_n)$
def p2(model, X):
    '''
    model: a dictionary with the model probabilities.
    X: a list with x_i values [x_1, x_2, ... , x_n]
    Returns: the probability P(A = 1 | x_1, x_2, ... , x_n)
    '''
    return 0
4. Write a function that given a model calculates $P(A|x_1, x_n)$
def p3(model, x_1, x_n):
    '''
    model: a dictionary with the model probabilities.
    x_1, x_n: x values
    Returns: the probability P(A = 1 | x_1, x_n)
    '''
    return 0
Total notifications

This data covers all the users who received at least one notification during the month, whether or not they actually visited the site during the month, so we'd expect the numbers to be dominated by a large bulk of users with very few notifications, with a long tail of very few users with a very large number of notifications. But let's characterize that a bit. At each wiki, how many and what percent of users with any notifications got fewer than 5?
def beyond_threshold(df, wikis, threshold, direction):
    columns = ["wiki", "users", "% of users", "% of notifications"]
    results = []
    for wiki in wikis:
        by_wiki = filter_by_wiki(df, wiki)
        total_users = by_wiki.iloc[:, 1].sum()
        total_notifs = 0
        for row in by_wiki.iterrows():
            total_notifs += row[1][0] * row[1][1]

        # select the rows strictly below or strictly above the threshold
        if direction == "under":
            beyond = by_wiki[by_wiki.iloc[:, 0] < threshold]
        elif direction == "over":
            beyond = by_wiki[by_wiki.iloc[:, 0] > threshold]

        users_beyond_threshold = beyond.iloc[:, 1].sum()
        notifs_beyond_threshold = 0
        for row in beyond.iterrows():
            notifs_beyond_threshold += row[1][0] * row[1][1]

        user_proportion = users_beyond_threshold / total_users
        notifs_proportion = notifs_beyond_threshold / total_notifs
        results.append([
            wiki,
            users_beyond_threshold,
            round(user_proportion * 100, 1),
            round(notifs_proportion * 100, 1)
        ])
    return pd.DataFrame(results, columns=columns)

beyond_threshold(notifs, wikis, 5, "under")
Notifications research.ipynb
neilpquinn/2016-02-notifications-exploration
mit
5 or more?
beyond_threshold(notifs, wikis, 4, "over")
And what percent of users got 25 notifications or more—becoming more or less "daily notified"?
beyond_threshold(notifs, wikis, 24, "over")
That's lower than I expected at English Wikipedia. It only had about 1,200 users with at least 30 notifications per month, compared to 3,500 highly active users (100+ edits) per month. However, both Flow wikis have higher percentages than the non-Flow wikis. Now, let's look at the actual distributions. To make it easier to comprehend, I'll cut off the 90%+ of users with fewer than 5 notifications. I'll also cut off the users with 100 or more. How many is that?
beyond_threshold(notifs, wikis, 99, "over")
Graphs
fig, axarr = plt.subplots(5, 1, figsize=(12, 30))
fig.suptitle("Total notifications per user", fontsize=24)
fig.subplots_adjust(top=0.95)

for i, wiki in enumerate(wikis):
    plot_by_wiki(notifs, wiki, ax=axarr[i])
So, as expected, all the wikis have a pretty regular power-law distribution of notifications.

Unread notifications

First, the counts and percentages for various levels of unread notifications.

Under 5
beyond_threshold(unreads, wikis, 5, "under")
5 or more
beyond_threshold(unreads, wikis, 4, "over")
25 or more
beyond_threshold(unreads, wikis, 24, "over")
100 or more
beyond_threshold(unreads, wikis, 99, "over")
Histograms
fig, axarr = plt.subplots(5, 1, figsize=(12, 30))
fig.suptitle("Unread notifications per user", fontsize=24)
fig.subplots_adjust(top=0.95)

for i, wiki in enumerate(wikis):
    plot_by_wiki(unreads, wiki, ax=axarr[i])
Preparing the Dataset

- Load the dataset from a tab-separated text file
- The dataset contains three columns: feature 1, feature 2, and the class labels
- The dataset contains 100 entries sorted by class label, 50 examples from each class
data = np.genfromtxt('perceptron_toydata.txt', delimiter='\t')
X, y = data[:, :2], data[:, 2]
y = y.astype(int)  # np.int is deprecated; use the built-in int

print('Class label counts:', np.bincount(y))

plt.scatter(X[y == 0, 0], X[y == 0, 1], label='class 0', marker='o')
plt.scatter(X[y == 1, 0], X[y == 1, 1], label='class 1', marker='s')
plt.xlabel('feature 1')
plt.ylabel('feature 2')
plt.legend()
plt.show()
machinelearning/deep-learning-book/code/ch02_perceptron/ch02_perceptron.ipynb
othersite/document
apache-2.0
- Shuffle the dataset
- Split the dataset into 70% training and 30% test data
- Seed the random number generator for reproducibility
shuffle_idx = np.arange(y.shape[0])
shuffle_rng = np.random.RandomState(123)
shuffle_rng.shuffle(shuffle_idx)

X, y = X[shuffle_idx], y[shuffle_idx]

# X and y are already shuffled, so split them directly
# (indexing with shuffle_idx again would shuffle a second time)
X_train, X_test = X[:70], X[70:]
y_train, y_test = y[:70], y[70:]
Standardize training and test datasets (mean zero, unit variance)
mu, sigma = X_train.mean(axis=0), X_train.std(axis=0) X_train = (X_train - mu) / sigma X_test = (X_test - mu) / sigma
Check dataset (here: training dataset) after preprocessing steps
plt.scatter(X_train[y_train == 0, 0], X_train[y_train == 0, 1], label='class 0', marker='o')
plt.scatter(X_train[y_train == 1, 0], X_train[y_train == 1, 1], label='class 1', marker='s')
plt.xlabel('feature 1')
plt.ylabel('feature 2')
plt.legend()
plt.show()
Implementing a Perceptron in NumPy Implement function for perceptron training in NumPy
def perceptron_train(features, targets, mparams=None, zero_weights=True,
                     learning_rate=1., seed=None):
    """Perceptron training function for binary class labels

    Parameters
    ----------
    features : numpy.ndarray, shape=(n_samples, m_features)
        A 2D NumPy array containing the training examples
    targets : numpy.ndarray, shape=(n_samples,)
        A 1D NumPy array containing the true class labels
    mparams : dict or None (default: None)
        A dictionary containing the model parameters, for instance as returned
        by this function. If None, a new model parameter dictionary is
        initialized. Note that the values in mparams are updated inplace if a
        mparams dict is provided.
    zero_weights : bool (default: True)
        Initializes weights to all zeros, otherwise model weights are
        initialized to small random numbers from a normal distribution with
        mean zero and standard deviation 0.1.
    learning_rate : float (default: 1.0)
        A learning rate for the parameter updates. Note that a learning rate
        has no effect on the direction of the decision boundary if the model
        weights are initialized to all zeros.
    seed : int or None (default: None)
        Seed for the pseudo-random number generator that initializes the
        weights if zero_weights=False

    Returns
    -------
    mparams : dict
        The model parameters after training the perceptron for one epoch.
        The mparams dictionary has the form:
        {'weights': np.array([weight_1, weight_2, ... , weight_m]),
         'bias': np.array([bias])}
    """
    # initialize model parameters
    if mparams is None:
        mparams = {'bias': np.zeros(1)}
        if zero_weights:
            mparams['weights'] = np.zeros(features.shape[1])
        else:
            rng = np.random.RandomState(seed)
            mparams['weights'] = rng.normal(loc=0.0, scale=0.1,
                                            size=(features.shape[1]))

    # train one epoch
    for training_example, true_label in zip(features, targets):
        linear = np.dot(training_example, mparams['weights']) + mparams['bias']

        # if class 1 was predicted but true label is 0
        if linear > 0. and not true_label:
            mparams['weights'] -= learning_rate * training_example
            mparams['bias'] -= learning_rate * 1.

        # if class 0 was predicted but true label is 1
        elif linear <= 0. and true_label:
            mparams['weights'] += learning_rate * training_example
            mparams['bias'] += learning_rate * 1.

    return mparams
Train the perceptron (one initial epoch, then two more epochs)
model_params = perceptron_train(X_train, y_train, mparams=None, zero_weights=True)

for _ in range(2):
    _ = perceptron_train(X_train, y_train, mparams=model_params)
Implement a function for perceptron predictions in NumPy
def perceptron_predict(features, mparams):
    """Perceptron prediction function for binary class labels

    Parameters
    ----------
    features : numpy.ndarray, shape=(n_samples, m_features)
        A 2D NumPy array containing the training examples
    mparams : dict
        The model parameters of the perceptron in the form:
        {'weights': np.array([weight_1, weight_2, ... , weight_m]),
         'bias': np.array([bias])}

    Returns
    -------
    predicted_labels : np.ndarray, shape=(n_samples)
        NumPy array containing the predicted class labels.
    """
    linear = np.dot(features, mparams['weights']) + mparams['bias']
    predicted_labels = np.where(linear.reshape(-1) > 0., 1, 0)
    return predicted_labels
Compute training and test error
train_errors = np.sum(perceptron_predict(X_train, model_params) != y_train)
test_errors = np.sum(perceptron_predict(X_test, model_params) != y_test)

print('Number of training errors', train_errors)
print('Number of test errors', test_errors)
Visualize the decision boundary

The perceptron is a linear function with a threshold:

$$w_{1}x_{1} + w_{2}x_{2} + b \geq 0.$$

We can rearrange this inequality as follows (assuming $w_2 > 0$):

$$w_{1}x_{1} + b \geq - w_{2}x_{2}$$

$$x_{2} \geq - \frac{w_{1}x_{1}}{w_2} - \frac{b}{w_2}$$
x_min = -2
y_min = (-(model_params['weights'][0] * x_min) / model_params['weights'][1]
         - (model_params['bias'] / model_params['weights'][1]))

x_max = 2
y_max = (-(model_params['weights'][0] * x_max) / model_params['weights'][1]
         - (model_params['bias'] / model_params['weights'][1]))

fig, ax = plt.subplots(1, 2, sharex=True, figsize=(7, 3))

ax[0].plot([x_min, x_max], [y_min, y_max])
ax[1].plot([x_min, x_max], [y_min, y_max])

ax[0].scatter(X_train[y_train == 0, 0], X_train[y_train == 0, 1],
              label='class 0', marker='o')
ax[0].scatter(X_train[y_train == 1, 0], X_train[y_train == 1, 1],
              label='class 1', marker='s')

ax[1].scatter(X_test[y_test == 0, 0], X_test[y_test == 0, 1],
              label='class 0', marker='o')
ax[1].scatter(X_test[y_test == 1, 0], X_test[y_test == 1, 1],
              label='class 1', marker='s')

ax[1].legend(loc='upper left')
plt.show()
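As a quick numeric sanity check of the rearranged formula, any point lying exactly on the line $x_2 = -\frac{w_1 x_1}{w_2} - \frac{b}{w_2}$ must produce a net input of zero. The weights below are made up for illustration, not the trained values:

```python
# Sanity check of the boundary formula with made-up example weights
# (w1, w2, b are illustrative, not the trained parameters).
w1, w2, b = 0.5, -1.5, 0.25

def boundary_x2(x1):
    # x2 coordinate of the decision boundary for a given x1
    return -(w1 * x1) / w2 - b / w2

# Any point exactly on the boundary yields zero net input.
for x1 in (-2.0, 0.0, 2.0):
    net_input = w1 * x1 + w2 * boundary_x2(x1) + b
    assert abs(net_input) < 1e-12
```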
Suggested exercises Train a zero-weight perceptron with different learning rates and compare the model parameters and decision boundaries to each other. What do you observe? Repeat the previous exercise with randomly initialized weights.
# %load solutions/01_weight_zero_learning_rate.py # %load solutions/02_random_weights_learning_rate.py
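As a starting point for the first exercise (this is a self-contained sketch, independent of the solution files), the snippet below trains two zero-initialized perceptrons on a toy dataset with different learning rates. With zero-initialized weights, the learned parameters differ only by a constant scale factor, so the decision boundary stays the same:

```python
import numpy as np

def train_zero_init(features, targets, learning_rate, epochs=2):
    # minimal perceptron with zero-initialized weights (sketch)
    w = np.zeros(features.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, targets):
            predicted = 1 if (np.dot(x, w) + b) > 0.0 else 0
            update = learning_rate * (y - predicted)
            w += update * x
            b += update
    return w, b

rng = np.random.RandomState(0)
X = np.vstack([rng.normal(-1, 0.5, (20, 2)), rng.normal(1, 0.5, (20, 2))])
y = np.array([0] * 20 + [1] * 20)

w_a, b_a = train_zero_init(X, y, learning_rate=1.0)
w_b, b_b = train_zero_init(X, y, learning_rate=0.1)

# With zero initialization, the parameters are simply scaled by the
# learning rate, so the decision boundary direction does not change.
assert np.allclose(w_a * 0.1, w_b)
assert np.isclose(b_a * 0.1, b_b)
```

The scaling argument follows by induction: predictions depend only on the sign of the net input, which is unchanged when all parameters are multiplied by the same positive constant, so both runs make mistakes on exactly the same examples.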
Implementing a Perceptron in TensorFlow Setting up the perceptron graph
g = tf.Graph()

n_features = X_train.shape[1]

with g.as_default() as g:

    # initialize model parameters
    features = tf.placeholder(dtype=tf.float32,
                              shape=[None, n_features], name='features')
    targets = tf.placeholder(dtype=tf.float32,
                             shape=[None, 1], name='targets')
    params = {
        'weights': tf.Variable(tf.zeros(shape=[n_features, 1],
                                        dtype=tf.float32), name='weights'),
        'bias': tf.Variable([[0.]], dtype=tf.float32, name='bias')}

    # forward pass
    linear = tf.matmul(features, params['weights']) + params['bias']
    ones = tf.ones(shape=tf.shape(linear))
    zeros = tf.zeros(shape=tf.shape(linear))
    prediction = tf.where(tf.less(linear, 0.), zeros, ones,
                          name='prediction')

    # weight update
    diff = targets - prediction
    weight_update = tf.assign_add(params['weights'],
                                  tf.reshape(diff * features,
                                             (n_features, 1)))
    bias_update = tf.assign_add(params['bias'], diff)

    saver = tf.train.Saver()
Training the perceptron for 5 training samples for illustration purposes
with tf.Session(graph=g) as sess:
    sess.run(tf.global_variables_initializer())

    i = 0
    for example, target in zip(X_train, y_train):
        feed_dict = {features: example.reshape(-1, n_features),
                     targets: target.reshape(-1, 1)}
        _, _ = sess.run([weight_update, bias_update], feed_dict=feed_dict)
        i += 1
        if i >= 5:
            break

    modelparams = sess.run(params)
    print('Model parameters:\n', modelparams)

    saver.save(sess, save_path='perceptron')

    pred = sess.run(prediction, feed_dict={features: X_train})
    errors = np.sum(pred.reshape(-1) != y_train)
    print('Number of training errors:', errors)
Continue training the graph after restoring the session from a local checkpoint (this can be useful if we have to interrupt our computational session). Now train a complete epoch:
with tf.Session(graph=g) as sess:
    saver.restore(sess, os.path.abspath('perceptron'))

    for epoch in range(1):
        for example, target in zip(X_train, y_train):
            feed_dict = {features: example.reshape(-1, n_features),
                         targets: target.reshape(-1, 1)}
            _, _ = sess.run([weight_update, bias_update],
                            feed_dict=feed_dict)

    modelparams = sess.run(params)
    saver.save(sess, save_path='perceptron')

    pred = sess.run(prediction, feed_dict={features: X_train})
    train_errors = np.sum(pred.reshape(-1) != y_train)
    pred = sess.run(prediction, feed_dict={features: X_test})
    test_errors = np.sum(pred.reshape(-1) != y_test)

    print('Number of training errors', train_errors)
    print('Number of test errors', test_errors)
Suggested Exercises 3) Plot the decision boundary for this TensorFlow perceptron. Why do you think the TensorFlow implementation performs better than our NumPy implementation on the test set? - Hint 1: you can re-use the code that we used in the NumPy section - Hint 2: since the bias is a 2D array, you need to access the float value via modelparams['bias'][0]
# %load solutions/03_tensorflow-boundary.py
Theoretically, we could restart the Jupyter notebook now (we would just have to prepare the dataset again, though). We are going to restore the session from a meta graph (notice the plain "tf.Session()"). First, we have to load the datasets again.
with tf.Session() as sess:
    saver = tf.train.import_meta_graph(os.path.abspath('perceptron.meta'))
    saver.restore(sess, os.path.abspath('perceptron'))

    pred = sess.run('prediction:0', feed_dict={'features:0': X_train})
    train_errors = np.sum(pred.reshape(-1) != y_train)
    pred = sess.run('prediction:0', feed_dict={'features:0': X_test})
    test_errors = np.sum(pred.reshape(-1) != y_test)

    print('Number of training errors', train_errors)
    print('Number of test errors', test_errors)
Note that you also have access to a quicker shortcut for adding weights to a layer: the add_weight() method:
# TODO
# Use the `add_weight()` method for adding weights to a layer
class Linear(keras.layers.Layer):
    def __init__(self, units=32, input_dim=32):
        super(Linear, self).__init__()
        self.w = self.add_weight(
            shape=(input_dim, units), initializer="random_normal",
            trainable=True
        )
        self.b = self.add_weight(shape=(units,), initializer="zeros",
                                 trainable=True)

    def call(self, inputs):
        return tf.matmul(inputs, self.w) + self.b


x = tf.ones((2, 2))
linear_layer = Linear(4, 2)
y = # TODO: Your code goes here
print(y)
courses/machine_learning/deepdive2/introduction_to_tensorflow/labs/custom_layers_and_models.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
In many cases, you may not know in advance the size of your inputs, and you would like to lazily create weights when that value becomes known, some time after instantiating the layer. In the Keras API, we recommend creating layer weights in the build(self, input_shape) method of your layer. Like this:
# TODO
class Linear(keras.layers.Layer):
    def __init__(self, units=32):
        super(Linear, self).__init__()
        self.units = units

    def build(self, input_shape):
        self.w = self.add_weight(
            shape=# TODO: Your code goes here,
            initializer="random_normal",
            trainable=True,
        )
        self.b = self.add_weight(
            shape=(self.units,), initializer="random_normal", trainable=True
        )

    def call(self, inputs):
        return tf.matmul(inputs, self.w) + self.b
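The lazy-build pattern itself is not TensorFlow-specific. A minimal NumPy sketch (with a hypothetical `LazyLinear` class, not part of any framework) of "create the weights on the first call, once the input size is known" could look like this:

```python
import numpy as np

class LazyLinear:
    """Creates its weights on the first call, like Keras' build()."""
    def __init__(self, units=32):
        self.units = units
        self.built = False

    def build(self, input_shape):
        # weight creation is deferred until the input size is known
        rng = np.random.RandomState(0)
        self.w = rng.normal(scale=0.05, size=(input_shape[-1], self.units))
        self.b = np.zeros(self.units)
        self.built = True

    def __call__(self, inputs):
        if not self.built:
            self.build(inputs.shape)  # first call triggers build()
        return inputs @ self.w + self.b

layer = LazyLinear(units=4)
assert not layer.built
out = layer(np.ones((2, 3)))  # weights are created here
assert layer.built
assert out.shape == (2, 4)
```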
Layers are recursively composable If you assign a Layer instance as an attribute of another Layer, the outer layer will start tracking the weights of the inner layer. We recommend creating such sublayers in the __init__() method (since the sublayers will typically have a build method, they will be built when the outer layer gets built).
# TODO
# Let's assume we are reusing the Linear class
# with a `build` method that we defined above.

class MLPBlock(keras.layers.Layer):
    def __init__(self):
        super(MLPBlock, self).__init__()
        self.linear_1 = Linear(32)
        self.linear_2 = Linear(32)
        self.linear_3 = Linear(1)

    def call(self, inputs):
        x = self.linear_1(inputs)
        x = tf.nn.relu(x)
        x = self.linear_2(x)
        x = tf.nn.relu(x)
        return self.linear_3(x)


mlp = # TODO: Your code goes here
y = mlp(tf.ones(shape=(3, 64)))  # The first call to `mlp` will create the weights
print("weights:", len(mlp.weights))
print("trainable weights:", len(mlp.trainable_weights))
These losses (including those created by any inner layer) can be retrieved via layer.losses. This property is reset at the start of every __call__() to the top-level layer, so that layer.losses always contains the loss values created during the last forward pass.
# TODO
class OuterLayer(keras.layers.Layer):
    def __init__(self):
        super(OuterLayer, self).__init__()
        self.activity_reg = # TODO: Your code goes here

    def call(self, inputs):
        return self.activity_reg(inputs)


layer = OuterLayer()
assert len(layer.losses) == 0  # No losses yet since the layer has never been called

_ = layer(tf.zeros(1, 1))
assert len(layer.losses) == 1  # We created one loss value

# `layer.losses` gets reset at the start of each __call__
_ = layer(tf.zeros(1, 1))
assert len(layer.losses) == 1  # This is the loss created during the call above
The add_metric() method Similarly to add_loss(), layers also have an add_metric() method for tracking the moving average of a quantity during training. Consider the following layer: a "logistic endpoint" layer. It takes as inputs predictions & targets, it computes a loss which it tracks via add_loss(), and it computes an accuracy scalar, which it tracks via add_metric().
# TODO
class LogisticEndpoint(keras.layers.Layer):
    def __init__(self, name=None):
        super(LogisticEndpoint, self).__init__(name=name)
        self.loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)
        self.accuracy_fn = keras.metrics.BinaryAccuracy()

    def call(self, targets, logits, sample_weights=None):
        # Compute the training-time loss value and add it
        # to the layer using `self.add_loss()`.
        loss = self.loss_fn(targets, logits, sample_weights)
        self.add_loss(loss)

        # Log accuracy as a metric and add it
        # to the layer using `self.add_metric()`.
        acc = # TODO: Your code goes here

        # Return the inference-time prediction tensor (for `.predict()`).
        return tf.nn.softmax(logits)
You can optionally enable serialization on your layers If you need your custom layers to be serializable as part of a Functional model, you can optionally implement a get_config() method:
# TODO
class Linear(keras.layers.Layer):
    def __init__(self, units=32):
        super(Linear, self).__init__()
        self.units = units

    def build(self, input_shape):
        self.w = self.add_weight(
            shape=(input_shape[-1], self.units),
            initializer="random_normal",
            trainable=True,
        )
        self.b = self.add_weight(
            shape=(self.units,), initializer="random_normal", trainable=True
        )

    def call(self, inputs):
        return tf.matmul(inputs, self.w) + self.b

    def get_config(self):
        return {"units": self.units}


# You can enable serialization on your layers using the `get_config()` method.
# Now you can recreate the layer from its config:
layer = Linear(64)
config = # TODO: Your code goes here
print(config)
new_layer = Linear.from_config(config)
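The `get_config()`/`from_config()` contract is essentially a round trip through the constructor arguments. A framework-free sketch of the same idea, using a hypothetical `Scaler` class:

```python
class Scaler:
    """Toy 'layer' whose only state is its constructor arguments."""
    def __init__(self, factor=1.0):
        self.factor = factor

    def get_config(self):
        # everything needed to rebuild an equivalent object
        return {"factor": self.factor}

    @classmethod
    def from_config(cls, config):
        return cls(**config)

    def __call__(self, x):
        return x * self.factor

layer = Scaler(factor=2.5)
clone = Scaler.from_config(layer.get_config())

# the clone behaves exactly like the original
assert clone(4) == layer(4) == 10.0
```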
The paragraph below was lifted from #2 and describes how the humanized gene(s) were made with a series of overlapping oligonucleotides. Two primers are described that were used to amplify the synthetic gene in order to clone it in the pBS vector. We can assume that the GFP sequence is identical to #3. The final product is the pBS_GFPH1 vector.

Synthesis of the gfph cDNA. The gfph cDNA was synthesized by assembling mutually priming synthetic oligonucleotides (Fig. 1). The gene was divided into eight segments of approximately equal length, and four pairs of oligonucleotides were synthesized, each pair consisting of two overlapping oligonucleotides with a short stretch of overlap (underlined in Fig. 1), one coding for the sense strand and the other coding for the antisense strand. After annealing and extension with Sequenase, pairs 1 and 2 were digested with EaeI, whereas pairs 3 and 4 were digested with BamHI. The digested products were then ligated in two separate reactions: pairs 1 and 2 and pairs 3 and 4. Ligation products of the desired length were purified on a 5% polyacrylamide gel under nondenaturing conditions. Both DNA fragments were then digested with EcoRII and ligated to each other. The final product was amplified by PCR, using a pair of oligonucleotides partially complementary to the gfph cDNA (boldface in the sequences presented below) and containing the restriction sites NotI, XbaI, and HindIII (underlined in the sequences presented below) for cloning. The sequence of the upstream primer, which included a Kozak consensus sequence (18), was 5′-TGCTCTAGAGCGGCCGCCGCCACCATGAGCAAGGGCGAGGAACTG-3′; the downstream primer sequence was 5′-CGGAAGCTTGCGGCCGCTCACTTGTACAGCTCGTCCAT-3′. After digestion of the PCR product with XbaI and HindIII, the DNA fragment was cloned into pBS(+) (Stratagene) and sequenced. Several independent clones were isolated and sequenced. These clones had mutations in the coding sequence which presumably either occurred during PCR amplification or were present in the oligonucleotides. Portions of these clones were then spliced together to produce the final gfph gene that encoded a wild-type amino acid sequence. The resulting construct, designated pBS-GFPH1, contained the coding sequence for wild-type GFP. Two mutants were constructed in the pBS-GFPH background by site-directed PCR mutagenesis. One of these converted Ser-65 to Thr and was called pBS-GFPH2; the other converted Tyr-66 to His and was called pBS-GFPHB.

pydna is used to describe the cloning process.
from pydna.genbank import Genbank
from pydna.parsers import parse_primers
from pydna.amplify import pcr
from pydna.readers import read
from pydna.gel import Gel
notebooks/pGreenLantern1/pGreenLantern1.ipynb
BjornFJohansson/pydna-examples
bsd-3-clause
We get the gfp gene from GenBank according to #2:
gb = Genbank("bjornjobb@gmail.com")
humanized_gfp_gene = gb.nucleotide('U50963')
The upstream and downstream primers were described in #2
up, dp = parse_primers('''
>upstream_primer
TGCTCTAGAGCGGCCGCCGCCACCATGAGCAAGGGCGAGGAACTG

>downstream_primer
CGGAAGCTTGCGGCCGCTCACTTGTACAGCTCGTCCAT''')

humanized_gfp_product = pcr(up, dp, humanized_gfp_gene)
humanized_gfp_product
humanized_gfp_product.figure()
The PCR product contains the entire GFP coding sequence, as expected. The PCR product was digested with XbaI and HindIII. We import the restriction enzymes from Biopython:
from Bio.Restriction import XbaI, HindIII

stuffer, gene_fragment, stuffer = humanized_gfp_product.cut(XbaI, HindIII)
gene_fragment
The gene fragment has the expected size and sticky ends:
gene_fragment.seq
The pBS(+) plasmid is also known as [BlueScribe](<https://www.snapgene.com/resources/plasmid_files/basic_cloning_vectors/pBS(+)>), which is available from GenBank under accession number L08783.
pBSplus = gb.nucleotide("L08783")

stuffer, pBS_lin = pBSplus.cut(XbaI, HindIII)
stuffer, pBS_lin
A small 28 bp stuffer fragment is lost upon digestion:
pBS_GFPH1 = (pBS_lin+gene_fragment).looped()
The pBS_GFPH1 plasmid is 3926 bp long
pBS_GFPH1
The second paragraph in the Materials section in #2 is harder to follow. The first part describes the construction of a vector with wild-type GFP, which is substituted for the humanized GFP in the end.

Plasmids referenced are:

1. TU#65
2. pCMVb
3. pRc/CMV
4. pTRBR

The TU#65 plasmid is described in:

Chalfie, M., Y. Tu, G. Euskirchen, W. W. Ward, and D. C. Prasher. 1994. Green fluorescent protein as a marker for gene expression. Science 263:802–805.

The paper above says very little about the construction or sequence of TU#65. The pTRBR plasmid is described in:

Ryan, J. H., Zolotukhin, S., and Muzyczka, N. (1996) Sequence requirements for binding of Rep68 to the adeno-associated virus terminal repeats. J. Virol. 70, 1542–1553.

The construction of pTRBR has more details but refers to sequences in older papers. Another problem is that the sequences of the PCR primers described below were not given in the publication. This makes it almost impossible to follow the cloning strategy without additional information.

What we can say for sure is that the NotI cassette in pBS_GFPH1 containing the humanized GFP is identical to the one in the final pGreen Lantern-1 sequence (see the last emphasized text in the paragraph below).

Construction of rAAV vector plasmids. Briefly, the gfp10 sequence was subcloned into the NotI site of pCMVb (Clontech) after digestion of the parent plasmid TU#65 (4) with AgeI and EcoRI, filling in the ends with Klenow DNA polymerase, and adding NotI linkers. The resulting plasmid, designated pCMV green, was then used as a template to amplify in a PCR the transcription cassette containing the cytomegalovirus (CMV) promoter, the simian virus 40 (SV40) intron, the gfp10 cDNA, and the SV40 polyadenylation signal. The upstream PCR primer complementary to the CMV promoter also included an overhang that contained the BglII, EcoRI, and KpnI sites. The downstream PCR primer, complementary to the polyadenylation signal, included a SalI site overhang.
The polyadenylation signal of the bovine growth hormone gene was amplified in another PCR using plasmid pRc/CMV (Invitrogen) as the template. The upstream primer in this reaction contained a SalI site overhang, and the downstream primer contained a BglII site. After purification of the PCR products on a 1% agarose gel, the respective fragments were digested with SalI and ligated to each other via the exposed SalI ends. The ligation product was gel purified and digested with BglII. The 160-bp BglII-PstI fragment, containing the AAV terminal repeat, was isolated by gel purification from plasmid pTRBR(+) (30). This fragment had been subcloned into pTRBR(+) from the previously described plasmid dl3-94 (26). It was then ligated to both ends of the BglII-digested cassette, containing the CMV promoter, SV40 intron, gfp10 cDNA, SV40 poly(A) site, and bovine growth hormone poly(A) site. The ligation product was then cut with PstI and subcloned into plasmid pBS(+) (Stratagene), which had been modified by converting the PvuII sites at positions 766 and 1148 into PstI sites by adding PstI linkers and deleting the internal 382-bp fragment, containing the polylinker region. The resulting plasmid was designated pTRgreen. The neomycin resistance gene (neo) cassette, driven by the herpes simplex virus thymidine kinase gene promoter and the enhancer from polyomavirus, was obtained from plasmid pMC1neo (Stratagene) by cutting the plasmid with XhoI, filling in the ends with Klenow DNA polymerase, adding SalI linkers, and digesting with SalI. The DNA fragment containing the neo cassette was gel purified and subcloned into the SalI site of pTRgreen. The resulting construct, pTRBS-UF (UF for user friendly), is depicted in Fig. 2. To construct pTRBS-UF1, pTRBS-UF2, or pTRBS-UFB, we substituted the NotI fragment of pBS-GFPH1 (wild type), pBS-GFPH2 (Thr-65), or pBS-GFPHB (His-66), respectively, for the NotI fragment of pTRBS-UF (Fig. 2).
Any DNA fragment that had undergone PCR amplification was sequenced to confirm the identity of the original sequence.

2nd attempt

A Google search for the pGreen Lantern-1 sequence revealed that there is a plasmid deposited at Addgene where the depositors claim that the backbone is pGreen Lantern-1. This plasmid is called pGL-MLKif3B. The construction of this vector is described in:

Ginkel, L. M., and Wordeman, L. (2000) Expression and partial characterization of kinesin-related proteins in differentiating and adult skeletal muscle. Mol. Biol. Cell 11, 4143–4158.

The paragraph below was lifted from Ginkel et al.:

Expression Constructs. The GFP-KIF3B-motorless deletion construct (GFP-KIF3B-ML) was made by modifying GFP-MCAK (Maney et al., 1998) in pOPRSVI-CAT (Stratagene). Briefly, by using a NdeI site inserted at the junction of the GFP and MCAK coding regions by site-directed mutagenesis (QuikChange site-directed mutagenesis kit; Stratagene), MCAK was removed by NdeI-XhoI digestion. Before making the GFP-KIF3B-ML, a GFP-KIF3B-tail construct was made. The NdeI/XhoI fragment of KIF3B used to make the bacterial expression construct was inserted into the NdeI-XhoI sites of the prepared vector. To make GFP-KIF3B-ML, the incorporated NdeI site was removed and replaced with an AvrII site (QuikChange site-directed mutagenesis kit; Stratagene). The fragment of KIF3B corresponding to the coiled-coil plus part of the tail domain (nucleotides 1154–1888, amino acids 364–609) was generated by PCR from the isolated pBluescript II-KIF3B. AvrII and BbsI sites, incorporated into the 5′ and 3′ PCR primers, respectively, were used to insert the fragment into the AvrII site at the GFP junction and the unique BbsI site within the tail domain. The resulting GFP-KIF3B-ML coding region was removed from pOPRSVI-CAT with NotI and inserted into pGREEN-LANTERN-1 (Life Technologies, Rockville, MD) due to increased expression in C2C12 cells.
The important part is the final one, which indicates that we can recreate the pGreen Lantern-1 sequence by removing the NotI insert and replacing it with the original insert. We know that the NotI insert of pGreen Lantern-1 is the same as for the pBS_GFPH1 vector, which we have. We will use this alternative shortcut to try to recreate the sequence.

The pGL-MLKif3B plasmid is difficult to read programmatically; here we use the requests and lxml libraries:
import requests
from lxml import html

r = requests.get('https://www.addgene.org/13744/sequences/')
tree = html.fromstring(r.text)

rawdata_addgene_full_sequence = tree.xpath(".//*[@id='depositor-full']")
pGL_MLKif3B = read(rawdata_addgene_full_sequence[0].text_content()).looped()

rawdata_addgene_partial_sequence = tree.xpath(".//*[@id='addgene-partial']")
pGL_MLKif3B_partial = read(rawdata_addgene_partial_sequence[0].text_content())

assert len(pGL_MLKif3B) == 6210
The sequence seems to have the correct size:
pGL_MLKif3B
We cut out the NotI fragment
from Bio.Restriction import NotI

Kif3b_GFP, pGL_backbone = pGL_MLKif3B.cut(NotI)
Kif3b_GFP, pGL_backbone
The remaining backbone sequence is 4304 bp. We cut out the NotI GFP cassette from pBS_GFPH1
humanized_gfp_NotI_frag, pBS_bb = pBS_GFPH1.cut(NotI)
humanized_gfp_NotI_frag, pBS_bb
Then we combine the backbone from pGL_MLKif3B (4304 bp) with the insert from pBS_GFPH1 (736 bp):
pGreenLantern1 = (pGL_backbone + humanized_gfp_NotI_frag).looped()
The sequence seems to have roughly the correct size (5 kb):
pGreenLantern1
pGreenLantern1.locus = "pGreenLantern1"
The candidate for the pGreenLantern1 sequence can be downloaded from the link below.
pGreenLantern1.write("pGreenLantern1.gb")
The plasmid map below is from the patent (#4)
from IPython.display import Image

Image("https://patentimages.storage.googleapis.com/US6638732B1/US06638732-20031028-D00005.png", width=300)
The map below was made with plasmapper and it corresponds roughly to the map from the patent above.
Image("plasMap203_1479761881670.png", width=600)
Check the sequence by restriction digest

If the actual plasmid is available, the sequence assembled here can be compared to the actual vector by restriction analysis. NdeI cuts the sequence three times, producing fragments that are easy to distinguish on a gel.
from Bio.Restriction import NdeI

fragments = pGreenLantern1.cut(NdeI)
fragments

from pydna.gel import weight_standard_sample

# PYTEST_VALIDATE_IGNORE_OUTPUT
%matplotlib inline
gel = Gel([weight_standard_sample('1kb+_GeneRuler'), fragments])
gel.run()

Image("http://static.wixstatic.com/media/5be0cc_b35636c46e654d8b8c09e8cf17ad13aa.jpg/v1/fill/w_281,h_439,al_c,q_80,usm_0.66_1.00_0.01/5be0cc_b35636c46e654d8b8c09e8cf17ad13aa.jpg")
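For a circular plasmid, the expected fragment sizes of a complete digest follow directly from the cut positions and must sum to the plasmid length. The helper below illustrates the check with made-up cut coordinates and a made-up plasmid length (the real NdeI positions come from the assembled sequence above):

```python
def circular_fragment_sizes(cut_positions, plasmid_length):
    # sizes of the fragments from a complete digest of a circular molecule
    cuts = sorted(cut_positions)
    sizes = [b - a for a, b in zip(cuts, cuts[1:])]
    # fragment spanning the origin (from the last cut around to the first)
    sizes.append(plasmid_length - cuts[-1] + cuts[0])
    return sizes

# hypothetical cut positions on a hypothetical 5040 bp circular plasmid
sizes = circular_fragment_sizes([120, 1900, 3500], 5040)
assert sum(sizes) == 5040
assert sorted(sizes) == [1600, 1660, 1780]
```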
Next, we download a data file containing spike train data from multiple trials of two neurons.
# Download data !wget -Nq https://github.com/INM-6/elephant-tutorial-data/raw/master/dataset-1/dataset-1.h5
doc/tutorials/unitary_event_analysis.ipynb
apdavison/elephant
bsd-3-clause
Write a plotting function
def plot_UE(data, Js_dict, Js_sig, binsize, winsize, winstep,
            pat, N, t_winpos, **kwargs):
    """
    Examples
    --------
    dict_args = {'events': {'SO': [100 * pq.ms]},
                 'save_fig': True,
                 'path_filename_format': 'UE1.pdf',
                 'showfig': True,
                 'suptitle': True,
                 'figsize': (12, 10),
                 'unit_ids': [10, 19, 20],
                 'ch_ids': [1, 3, 4],
                 'fontsize': 15,
                 'linewidth': 2,
                 'set_xticks': False,
                 'marker_size': 8}
    """
    import matplotlib.pylab as plt
    t_start = data[0][0].t_start
    t_stop = data[0][0].t_stop

    arg_dict = {'events': {}, 'figsize': (12, 10),
                'top': 0.9, 'bottom': 0.05, 'right': 0.95, 'left': 0.1,
                'hspace': 0.5, 'wspace': 0.5, 'fontsize': 15,
                'unit_ids': range(1, N + 1, 1), 'ch_real_ids': [],
                'showfig': False, 'lw': 2, 'S_ylim': [-3, 3],
                'marker_size': 8, 'suptitle': False, 'set_xticks': False,
                'save_fig': False, 'path_filename_format': 'UE.pdf'}
    arg_dict.update(kwargs)

    num_tr = len(data)
    unit_real_ids = arg_dict['unit_ids']
    num_row = 5
    num_col = 1
    ls = '-'
    alpha = 0.5
    plt.figure(1, figsize=arg_dict['figsize'])
    if arg_dict['suptitle'] == True:
        plt.suptitle("Spike Pattern:" + str((pat.T)[0]), fontsize=20)
    print('plotting UEs ...')
    plt.subplots_adjust(top=arg_dict['top'], right=arg_dict['right'],
                        left=arg_dict['left'], bottom=arg_dict['bottom'],
                        hspace=arg_dict['hspace'], wspace=arg_dict['wspace'])

    ax = plt.subplot(num_row, 1, 1)
    ax.set_title('Unitary Events', fontsize=arg_dict['fontsize'], color='r')
    for n in range(N):
        for tr, data_tr in enumerate(data):
            plt.plot(data_tr[n].rescale('ms').magnitude,
                     np.ones_like(data_tr[n].magnitude) * tr + n * (num_tr + 1) + 1,
                     '.', markersize=0.5, color='k')
            sig_idx_win = np.where(Js_dict['Js'] >= Js_sig)[0]
            if len(sig_idx_win) > 0:
                x = np.unique(Js_dict['indices']['trial' + str(tr)])
                if len(x) > 0:
                    xx = []
                    for j in sig_idx_win:
                        xx = np.append(xx, x[np.where(
                            (x * binsize >= t_winpos[j]) &
                            (x * binsize < t_winpos[j] + winsize))])
                    plt.plot(np.unique(xx) * binsize,
                             np.ones_like(np.unique(xx)) * tr + n * (num_tr + 1) + 1,
                             ms=arg_dict['marker_size'], marker='s', ls='',
                             mfc='none', mec='r')
        plt.axhline((tr + 2) * (n + 1), lw=2, color='k')
    y_ticks_pos = np.arange(num_tr / 2 + 1, N * (num_tr + 1), num_tr + 1)
    plt.yticks(y_ticks_pos)
    plt.gca().set_yticklabels(unit_real_ids, fontsize=arg_dict['fontsize'])
    for ch_cnt, ch_id in enumerate(arg_dict['ch_real_ids']):
        print(ch_id)
        plt.gca().text((max(t_winpos) + winsize).rescale('ms').magnitude,
                       y_ticks_pos[ch_cnt], 'CH-' + str(ch_id),
                       fontsize=arg_dict['fontsize'])
    plt.ylim(0, (tr + 2) * (n + 1) + 1)
    plt.xlim(0, (max(t_winpos) + winsize).rescale('ms').magnitude)
    plt.xticks([])
    plt.ylabel('Unit ID', fontsize=arg_dict['fontsize'])
    for key in arg_dict['events'].keys():
        for e_val in arg_dict['events'][key]:
            plt.axvline(e_val, ls=ls, color='r', lw=2, alpha=alpha)
    if arg_dict['set_xticks'] == False:
        plt.xticks([])

    print('plotting Raw Coincidences ...')
    ax1 = plt.subplot(num_row, 1, 2, sharex=ax)
    ax1.set_title('Raw Coincidences', fontsize=20, color='c')
    for n in range(N):
        for tr, data_tr in enumerate(data):
            plt.plot(data_tr[n].rescale('ms').magnitude,
                     np.ones_like(data_tr[n].magnitude) * tr + n * (num_tr + 1) + 1,
                     '.', markersize=0.5, color='k')
            plt.plot(np.unique(Js_dict['indices']['trial' + str(tr)]) * binsize,
                     np.ones_like(np.unique(Js_dict['indices']['trial' + str(tr)])) * tr + n * (num_tr + 1) + 1,
                     ls='', ms=arg_dict['marker_size'], marker='s',
                     markerfacecolor='none', markeredgecolor='c')
        plt.axhline((tr + 2) * (n + 1), lw=2, color='k')
    plt.ylim(0, (tr + 2) * (n + 1) + 1)
    plt.yticks(np.arange(num_tr / 2 + 1, N * (num_tr + 1), num_tr + 1))
    plt.gca().set_yticklabels(unit_real_ids, fontsize=arg_dict['fontsize'])
    plt.xlim(0, (max(t_winpos) + winsize).rescale('ms').magnitude)
    plt.xticks([])
    plt.ylabel('Unit ID', fontsize=arg_dict['fontsize'])
    for key in arg_dict['events'].keys():
        for e_val in arg_dict['events'][key]:
            plt.axvline(e_val, ls=ls, color='r', lw=2, alpha=alpha)

    print('plotting PSTH ...')
    plt.subplot(num_row, 1, 3, sharex=ax)
    # max_val_psth = 0. * pq.Hz
    for n in range(N):
        plt.plot(t_winpos + winsize / 2.,
                 Js_dict['rate_avg'][:, n].rescale('Hz'),
                 label='unit ' + str(arg_dict['unit_ids'][n]),
                 lw=arg_dict['lw'])
    plt.ylabel('Rate [Hz]', fontsize=arg_dict['fontsize'])
    plt.xlim(0, (max(t_winpos) + winsize).rescale('ms').magnitude)
    max_val_psth = plt.gca().get_ylim()[1]
    plt.ylim(0, max_val_psth)
    plt.yticks([0, int(max_val_psth / 2), int(max_val_psth)],
               fontsize=arg_dict['fontsize'])
    plt.legend(bbox_to_anchor=(1.12, 1.05), fancybox=True, shadow=True)
    for key in arg_dict['events'].keys():
        for e_val in arg_dict['events'][key]:
            plt.axvline(e_val, ls=ls, color='r', lw=arg_dict['lw'],
                        alpha=alpha)
    if arg_dict['set_xticks'] == False:
        plt.xticks([])

    print('plotting emp. and exp. coincidences rate ...')
    plt.subplot(num_row, 1, 4, sharex=ax)
    plt.plot(t_winpos + winsize / 2., Js_dict['n_emp'], label='empirical',
             lw=arg_dict['lw'], color='c')
    plt.plot(t_winpos + winsize / 2., Js_dict['n_exp'], label='expected',
             lw=arg_dict['lw'], color='m')
    plt.xlim(0, (max(t_winpos) + winsize).rescale('ms').magnitude)
    plt.ylabel('# Coinc.', fontsize=arg_dict['fontsize'])
    plt.legend(bbox_to_anchor=(1.12, 1.05), fancybox=True, shadow=True)
    YTicks = plt.ylim(0, int(max(max(Js_dict['n_emp']),
                                 max(Js_dict['n_exp']))))
    plt.yticks([0, YTicks[1]], fontsize=arg_dict['fontsize'])
    for key in arg_dict['events'].keys():
        for e_val in arg_dict['events'][key]:
            plt.axvline(e_val, ls=ls, color='r', lw=2, alpha=alpha)
    if arg_dict['set_xticks'] == False:
        plt.xticks([])

    print('plotting Surprise ...')
    plt.subplot(num_row, 1, 5, sharex=ax)
    plt.plot(t_winpos + winsize / 2., Js_dict['Js'], lw=arg_dict['lw'],
             color='k')
    plt.xlim(0, (max(t_winpos) + winsize).rescale('ms').magnitude)
    plt.axhline(Js_sig, ls='-', color='gray')
    plt.axhline(-Js_sig, ls='-', color='gray')
    plt.xticks(t_winpos.magnitude[::int(len(t_winpos) / 10)])
    plt.yticks([-2, 0, 2], fontsize=arg_dict['fontsize'])
    plt.ylabel('S', fontsize=arg_dict['fontsize'])
    plt.xlabel('Time [ms]', fontsize=arg_dict['fontsize'])
    plt.ylim(arg_dict['S_ylim'])
    for key in arg_dict['events'].keys():
        for e_val in arg_dict['events'][key]:
            plt.axvline(e_val, ls=ls, color='r', lw=arg_dict['lw'],
                        alpha=alpha)
            plt.gca().text(e_val - 10 * pq.ms, 2 * arg_dict['S_ylim'][0],
                           key, fontsize=arg_dict['fontsize'], color='r')
    if arg_dict['set_xticks'] == False:
        plt.xticks([])

    if arg_dict['save_fig'] == True:
        plt.savefig(arg_dict['path_filename_format'])
        if arg_dict['showfig'] == False:
            plt.cla()
            plt.close()
    if arg_dict['showfig'] == True:
        plt.show()
doc/tutorials/unitary_event_analysis.ipynb
apdavison/elephant
bsd-3-clause
Calculate Unitary Events
UE = ue.jointJ_window_analysis(
    spiketrains, binsize=5 * pq.ms, winsize=100 * pq.ms,
    winstep=10 * pq.ms, pattern_hash=[3])

plot_UE(
    spiketrains, UE, ue.jointJ(0.05), binsize=5 * pq.ms,
    winsize=100 * pq.ms, winstep=10 * pq.ms,
    pat=ue.inverse_hash_from_pattern([3], N=2), N=2,
    t_winpos=ue._winpos(0 * pq.ms, spiketrains[0][0].t_stop,
                        winsize=100 * pq.ms, winstep=10 * pq.ms))
plt.show()
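The significance threshold passed to `plot_UE` comes from `ue.jointJ(0.05)`, which converts a p-value into the "surprise" measure S = log10((1 - p) / p) used in Unitary Events analysis. As a stand-alone sketch (independent of elephant's implementation), the transform looks like this:

```python
import math

def surprise(p_value):
    """Convert a p-value into the UE 'surprise' S = log10((1 - p) / p).

    S is ~0 at chance level (p = 0.5), positive for an excess of
    coincidences and negative for a lack of coincidences.
    """
    return math.log10((1.0 - p_value) / p_value)

# A significance level of 0.05 maps to a threshold of about 1.28,
# which is where the gray lines in the surprise panel are drawn.
threshold = surprise(0.05)
```

Smaller p-values map to larger surprise values, so crossing the upper gray line marks windows with significantly more coincidences than expected.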
doc/tutorials/unitary_event_analysis.ipynb
apdavison/elephant
bsd-3-clause
To use the Riot API, one more important step is to get your own API key, which can be obtained from here. Note that a normal developer API key has a narrow request limit, whereas a production API key for commercial use has a looser one. For now, we will just use the normal API key for demonstration. After getting your own API key, put it in the config dictionary below:
config = {
    'key': 'API_key',
}
.ipynb_checkpoints/LeagueRank_notebook-checkpoint.ipynb
DavidCorn/LeagueRank
apache-2.0
<a name="architecture"></a>Project Architecture <a name="crawl"></a>Data Crawling The architecture for the data crawler is shown as follows: The process of crawling data can be simplified into four steps: 1) Get a summoner list from the LOL server; 2) For each summoner, get his/her top 3 most frequently played champions; 3) Fetch each champion's game stats for the 2016 season (the latest complete season); 4) Put the fetched data into a corresponding csv file for storage. <a name="summonerList"></a>1. Fetch summoner list First of all, we need to fetch the summoner information. Riot provides an API to get summoner information by league. A league is a subdivision of a tier: for example, the gold tier contains summoners in gold I, gold II, gold III, gold IV and gold V, and its summoners are divided into several leagues, each of which contains summoners from all of these divisions. The __init__ method of RiotCrawler defines the tiers, and get_player_by_tier fetches the summoner lists in different leagues for the provided summoner ids.
class RiotCrawler:
    def __init__(self, key):
        self.key = key
        self.w = RiotWatcher(key)
        self.tiers = {
            'bronze': [],
            'silver': [],
            'gold': [],
            'platinum': [],
            'diamond': [],
            'challenger': [],
            'master': [],
        }

    # def get_player_list(self):
    #     recent_games = self.w.get_recent_games(self.player_id)
    #     player_list = set()
    #     for game in recent_games['games']:
    #         # only pick up ranked games
    #         if 'RANKED' in game['subType']:
    #             fellow_players = game['fellowPlayers']
    #             for fellow_player in fellow_players:
    #                 fellow_player_id = fellow_player['summonerId']
    #                 if fellow_player_id not in player_list:
    #                     player_list.add(fellow_player_id)
    #     return list(player_list)

    def get_player_by_tier(self, summoner_id):
        request_url = 'https://na.api.pvp.net/api/lol/na/v2.5/league/by-summoner/{}?api_key={}'.format(
            summoner_id, self.key
        )
        response = urllib2.urlopen(request_url)
        tier_info = ujson.loads(response.read())
        tier = tier_info[str(summoner_id)][0]['tier'].lower()
        entries = tier_info[str(summoner_id)][0]['entries']
        level = self.tiers[tier]
        for entry in entries:
            level.append(entry['playerOrTeamId'])
        # for l in level:
        #     print 'summoner id: {}'.format(str(l))
.ipynb_checkpoints/LeagueRank_notebook-checkpoint.ipynb
DavidCorn/LeagueRank
apache-2.0
get_tier returns a division dictionary whose keys are the tier names and whose values are the lists of summoner ids in each tier. The results are printed in a human-readable format, categorized by tier.
def get_tier():
    # challenger: 77759242
    # platinum: 53381
    # gold: 70359816
    # silver: 65213225
    # bronze: 22309680
    # master: 22551130
    # diamond: 34570626
    player_ids = [70359816, 77759242, 53381, 65213225, 22309680, 22551130, 34570626]
    riot_crawler = RiotCrawler(config['key'])
    for player_id in player_ids:
        print 'start crawling id: {}'.format(player_id)
        riot_crawler.get_player_by_tier(player_id)
    return riot_crawler.tiers


tiers = get_tier()
for tier, rank_dict in tiers.iteritems():
    print '--- {} ---'.format(tier)
    for summoner in rank_dict:
        print 'summoner id: {}'.format(summoner)
    print '--- end of {} ---'.format(tier)
.ipynb_checkpoints/LeagueRank_notebook-checkpoint.ipynb
DavidCorn/LeagueRank
apache-2.0
<a name="mfpChampions"></a>2. Fetch most frequently played champions Since we already have a dictionary mapping each rank tier to its summoner ids, we can now use those ids to get the stats of each player's most frequently played champions. We will use Riot's raw RESTful APIs with Python here. These are all the libraries needed in this process.
import csv
import json
import os
import urllib2
.ipynb_checkpoints/LeagueRank_notebook-checkpoint.ipynb
DavidCorn/LeagueRank
apache-2.0
Then we can move on and fetch the data we need. Riot gives us an API to get all champions a user played during the season, with the response in JSON format. After parsing the JSON response, we need the most frequently played champions, which best represent a player's level, so we sort the champion list by the number of games the player played with each champion (totalSessionsPlayed) in descending order. Notice that the first element in the sorted list is always the champion with id 0, which holds the aggregate stats of all champions the player used in the season, so we need to skip it. After filtering out a user's top frequently played champions, we save their stats, with the player's tier as the training label, into a csv file. In this project each champion has a corresponding csv file that records all the stats collected for that champion together with the players' tiers as training labels. Since there are hundreds of champions in League of Legends, we end up with hundreds of csv files for training, each named after the champion's id. If a file already exists, we append the new stats to it.
class TopChampion:
    FIELD_NAMES = ['totalSessionsPlayed', 'totalSessionsLost', 'totalSessionsWon',
                   'totalChampionKills', 'totalDamageDealt', 'totalDamageTaken',
                   'mostChampionKillsPerSession', 'totalMinionKills',
                   'totalDoubleKills', 'totalTripleKills', 'totalQuadraKills',
                   'totalPentaKills', 'totalUnrealKills', 'totalDeathsPerSession',
                   'totalGoldEarned', 'mostSpellsCast', 'totalTurretsKilled',
                   'totalPhysicalDamageDealt', 'totalMagicDamageDealt',
                   'totalFirstBlood', 'totalAssists', 'maxChampionsKilled',
                   'maxNumDeaths', 'label']

    def __init__(self, key, player_id, label, n):
        self.label = label
        self.player_id = player_id
        self.key = key
        self.n = n
        self.top_champions = []

    def get_top_champions(self):
        self.top_champions[:] = []
        data = urllib2.urlopen(
            'https://na.api.pvp.net/api/lol/na/v1.3/stats/by-summoner/' +
            self.player_id + '/ranked?season=SEASON2016&api_key=' + self.key
        ).read()
        json_data = json.loads(data)
        champions = json_data['champions']
        champion_stats = []
        for champion in champions:
            champion_stat = champion['stats']
            champion_stat['id'] = champion['id']
            champion_stat['label'] = self.label
            champion_stats.append(champion_stat)
        # Sort by games played (descending) and skip the first entry, which
        # is the aggregate over all champions (champion id 0).
        self.top_champions = sorted(
            champion_stats,
            key=lambda x: x['totalSessionsPlayed'],
            reverse=True)[1:self.n + 1]
        return self.top_champions

    def save_top_champions(self):
        for champion in self.top_champions:
            file_name = '../data/{}.csv'.format(champion['id'])
            # 'a' mode creates the file if it does not exist yet;
            # write the header row only for a newly created file.
            write_header = not os.path.isfile(file_name)
            with open(file_name, 'a') as csvfile:
                writer = csv.DictWriter(csvfile, fieldnames=self.FIELD_NAMES)
                if write_header:
                    writer.writeheader()
                writer.writerow({name: champion[name]
                                 for name in self.FIELD_NAMES})
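The core of get_top_champions is the sort-and-slice at the end: order by totalSessionsPlayed descending, then drop the first entry (the aggregate pseudo-champion with id 0, whose play count is the sum of all the others) and keep the next n. A small self-contained illustration with made-up stats:

```python
def top_n_champions(champion_stats, n):
    # Sort by games played, most-played first; element 0 is the
    # aggregate over all champions, so it is skipped.
    ranked = sorted(champion_stats,
                    key=lambda c: c['totalSessionsPlayed'],
                    reverse=True)
    return ranked[1:n + 1]

stats = [
    {'id': 0, 'totalSessionsPlayed': 60},   # aggregate entry (sum of the rest)
    {'id': 64, 'totalSessionsPlayed': 25},
    {'id': 103, 'totalSessionsPlayed': 30},
    {'id': 1, 'totalSessionsPlayed': 5},
]
top = top_n_champions(stats, 2)  # champions 103 and 64
```

The champion ids and play counts here are invented purely for the example.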
.ipynb_checkpoints/LeagueRank_notebook-checkpoint.ipynb
DavidCorn/LeagueRank
apache-2.0
With the above class, we can now crawl the stats of all champions and save them to csv files with the following code. Notice that this process is pretty slow because of the sleep calls in our code: Riot's APIs limit the call rate, and you cannot send more than 500 requests per 10 minutes. So every time we send a request, we sleep for 1 second to prevent error responses.
def main():
    import time
    tiers = get_tier()
    for tier, rank_dict in tiers.iteritems():
        print 'starting tier: {}'.format(tier)
        for summoner_id in rank_dict:
            print 'tier: {}, summoner id: {}'.format(tier, summoner_id)
            top_champion = TopChampion(config['key'], summoner_id, tier, 3)
            top_champion.get_top_champions()
            top_champion.save_top_champions()
            time.sleep(1)
        print 'end tier: {}'.format(tier)
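The fixed `time.sleep(1)` above is a blunt way to stay under the rate limit. A slightly more general sketch only sleeps when the request window is actually full; the clock and sleep functions are injected so the behavior can be demonstrated without real waiting (this helper is our own illustration, not part of any Riot client library):

```python
import collections

class Throttle:
    """Allow at most `max_calls` calls within any `period`-second window."""

    def __init__(self, max_calls, period, clock, sleep):
        self.max_calls = max_calls
        self.period = period
        self.clock = clock    # function returning the current time in seconds
        self.sleep = sleep    # function that sleeps for a given duration
        self.calls = collections.deque()

    def wait(self):
        now = self.clock()
        # Drop timestamps that have fallen out of the window.
        while self.calls and now - self.calls[0] >= self.period:
            self.calls.popleft()
        if len(self.calls) >= self.max_calls:
            # Window is full: sleep until the oldest call expires.
            self.sleep(self.period - (now - self.calls[0]))
            now = self.clock()
            while self.calls and now - self.calls[0] >= self.period:
                self.calls.popleft()
        self.calls.append(now)

# Demonstration with a fake clock, so no real time passes:
t = [0.0]
slept = []

def fake_clock():
    return t[0]

def fake_sleep(duration):
    slept.append(duration)
    t[0] += duration

throttle = Throttle(max_calls=2, period=10.0, clock=fake_clock, sleep=fake_sleep)
throttle.wait()
throttle.wait()   # first two calls pass immediately
throttle.wait()   # third call has to wait out the 10-second window
```

In the crawler, a call to throttle.wait() before each urlopen would replace the unconditional one-second sleep.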
.ipynb_checkpoints/LeagueRank_notebook-checkpoint.ipynb
DavidCorn/LeagueRank
apache-2.0
Vertex SDK: Custom training tabular regression model for batch prediction with explainability <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_custom_tabular_regression_batch_explain.ipynb"> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab </a> </td> <td> <a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_custom_tabular_regression_batch_explain.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> <td> <a href="https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_custom_tabular_regression_batch_explain.ipynb"> Open in Google Cloud Notebooks </a> </td> </table> <br/><br/><br/> Overview This tutorial demonstrates how to use the Vertex SDK to train and deploy a custom tabular regression model for batch prediction with explanation. Dataset The dataset used for this tutorial is the Boston Housing Prices dataset. The version of the dataset you will use in this tutorial is built into TensorFlow. The trained model predicts the median price of a house in units of 1K USD. Objective In this tutorial, you create a custom model, with a training pipeline, from a Python script in a Google prebuilt Docker container using the Vertex SDK, and then do a batch prediction with explanations on the uploaded model. You can alternatively create custom models using gcloud command-line tool or online using Cloud Console. The steps performed include: Create a Vertex custom job for training a model. Train the TensorFlow model. Retrieve and load the model artifacts. View the model evaluation. Set explanation parameters. Upload the model as a Vertex Model resource.
Make a batch prediction with explanations. Costs This tutorial uses billable components of Google Cloud: Vertex AI Cloud Storage Learn about Vertex AI pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage. Set up your local development environment If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step. Otherwise, make sure your environment meets this notebook's requirements. You need the following: The Cloud Storage SDK Git Python 3 virtualenv Jupyter notebook running in a virtual environment with Python 3 The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions: Install and initialize the SDK. Install Python 3. Install virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment. To install Jupyter, run pip3 install jupyter on the command-line in a terminal shell. To launch Jupyter, run jupyter notebook on the command-line in a terminal shell. Open this notebook in the Jupyter Notebook Dashboard. Installation Install the latest version of Vertex SDK for Python.
import os

# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
    USER_FLAG = "--user"
else:
    USER_FLAG = ""

! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
notebooks/community/sdk/sdk_custom_tabular_regression_batch_explain.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Wait for completion of batch prediction job Next, wait for the batch job to complete. Alternatively, one can set the parameter sync to True in the batch_predict() method to block until the batch prediction job is completed.
if not os.environ["IS_TESTING"]:
    batch_predict_job.wait()
notebooks/community/sdk/sdk_custom_tabular_regression_batch_explain.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Get the explanations Next, get the explanation results from the completed batch prediction job. The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. Each file contains one or more explanation results in CSV format: a CSV header with the predicted_label, followed by one CSV row with the explanation per prediction request.
if not os.environ["IS_TESTING"]:
    import tensorflow as tf

    bp_iter_outputs = batch_predict_job.iter_outputs()

    explanation_results = list()
    for blob in bp_iter_outputs:
        if blob.name.split("/")[-1].startswith("explanation"):
            explanation_results.append(blob.name)

    tags = list()
    for explanation_result in explanation_results:
        gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{explanation_result}"
        with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile:
            for line in gfile.readlines():
                print(line)
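Beyond printing the raw lines, a header-plus-rows CSV like this can be parsed into records with the standard csv module. The column names in this snippet are invented for illustration only, not the actual Vertex output schema:

```python
import csv
import io

# Hypothetical explanation output: one header row, then one row per request.
raw = io.StringIO(
    "predicted_value,attribution_CRIM,attribution_RM\n"
    "21.5,0.3,-1.2\n"
    "34.0,0.1,2.7\n"
)
records = list(csv.DictReader(raw))
# Every field arrives as a string, e.g. records[0]['predicted_value'] == '21.5';
# convert to float before using the attributions numerically.
```

In the loop above, the same approach would apply per file: wrap the file handle in csv.DictReader instead of printing line by line.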
notebooks/community/sdk/sdk_custom_tabular_regression_batch_explain.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Look at Pandas Dataframes *this is italicized*
df = pd.read_csv("../data/coal_prod_cleaned.csv")
df.head()
df.shape

# import qgrid  # Put imports at the top
# qgrid.nbinstall(overwrite=True)
# qgrid.show_grid(df[['MSHA_ID',
#                     'Year',
#                     'Mine_Name',
#                     'Mine_State',
#                     'Mine_County']], remote_js=True)
# Check out http://nbviewer.ipython.org/github/quantopian/qgrid/blob/master/qgrid_demo.ipynb for more (including demo)
notebooks/11-older-stuff.ipynb
jbwhit/jupyter-best-practices
mit
Pivot Tables w/ pandas http://nicolas.kruchten.com/content/2015/09/jupyter_pivottablejs/
!conda install pivottablejs -y

df = pd.read_csv("../data/mps.csv", encoding="ISO-8859-1")
df.head(10)
notebooks/11-older-stuff.ipynb
jbwhit/jupyter-best-practices
mit
Tab
import numpy as np
np.random.
notebooks/11-older-stuff.ipynb
jbwhit/jupyter-best-practices
mit
shift-tab
np.linspace(start=, )
notebooks/11-older-stuff.ipynb
jbwhit/jupyter-best-practices
mit
shift-tab-tab (equivalent in in Lab to shift-tab)
np.linspace(50, 150, num=100,)
notebooks/11-older-stuff.ipynb
jbwhit/jupyter-best-practices
mit
shift-tab-tab-tab-tab (doesn't work in lab)
np.linspace(start=, )
notebooks/11-older-stuff.ipynb
jbwhit/jupyter-best-practices
mit