Add a column
responses = househld.groupby('REGION')['WTFA_HH'].count()
responses.name = "Responses"
by_region['Responses'] = responses
by_region
notebooks/Explore_Files.ipynb
gsentveld/lunch_and_learn
mit
And we will change the index to a more descriptive one, based on the documentation of the household file.
by_region.index = ['Northeast', 'Midwest', 'South', 'West']
by_region
Saving this result We can use any of the to_xyz() functions to save this data to a file. Here we don't supply a path to save the data, which in turn just returns the result in the requested format.
print(by_region.to_json())
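With no path argument, the to_xyz() functions return the serialized result as a string; with a path they write to disk instead. A small sketch of both behaviors, using a stand-in DataFrame (the region counts here are made up for illustration):

```python
import pandas as pd

# A small stand-in for `by_region` (the counts here are made up for illustration).
by_region_demo = pd.DataFrame({'Responses': [100, 200, 300, 400]},
                              index=['Northeast', 'Midwest', 'South', 'West'])

# With no path argument, to_csv()/to_json() return the serialized result as a string.
csv_text = by_region_demo.to_csv()
json_text = by_region_demo.to_json()

# With a path argument they write to disk instead (and return None), e.g.:
# by_region_demo.to_csv('by_region.csv')
```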
Dealing with missing values It appears that the household file also holds information about why people did not respond. This field is empty if people responded. We are going to use that to filter the data, with a boolean index. We will use the NON_INTV response code to create the boolean index.
non_response_code = househld['NON_INTV']

import math
# If the value Is Not A Number, math.isnan() will return True.
responded = [math.isnan(x) for x in non_response_code]
notresponded = [not math.isnan(x) for x in non_response_code]

resp = househld[responded]
nonresp = househld[notresponded]

print("Total size: {}".format(househld.shape))
print("Responses: {}".format(resp.shape))
print("Non responses: {}".format(nonresp.shape))
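The list comprehensions with math.isnan() work, but pandas can build the same boolean indexes directly. A sketch of the vectorized equivalent, on a stand-in Series (the values are illustrative):

```python
import numpy as np
import pandas as pd

# Stand-in for househld['NON_INTV']: NaN means the household responded,
# a numeric code means it did not.
non_intv = pd.Series([np.nan, 1.0, np.nan, 3.0])

responded = non_intv.isna()      # True where the value is NaN
notresponded = non_intv.notna()  # True where a non-response code is present

# These boolean Series index a DataFrame just like the lists do:
# resp = househld[responded]; nonresp = househld[notresponded]
```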
Now we group the non-respondents by the reason code for why they did not respond.
non_intv_group = nonresp.groupby('NON_INTV')
non_intv_group.size()
Filling missing data If we just plot the data from the original DataFrame, we only get the rows that have a value. We can use the fillna() function to fix that and see all the data.
househld['INTV_MON'].hist(by=househld['NON_INTV'].fillna(0))
Neural nets
All nets inherit from sklearn.BaseEstimator and have the same interface as the other wrappers in REP (see 01-howto-Classifiers for details).

All of these net libraries support:
- classification
- multi-classification
- regression
- multi-target regression
- additional fitting (using the partial_fit method)

and don't support:
- staged prediction methods
- weights for data

Variables used in training
variables = list(data.columns[:25])
howto/06-howto-neural-nets.ipynb
scr4t/rep
apache-2.0
Simple training
tn = TheanetsClassifier(features=variables, layers=[20],
                        trainers=[{'optimize': 'nag', 'learning_rate': 0.1}])
tn.fit(train_data, train_labels)
Predicting probabilities, measuring the quality
# predict probabilities for each class
prob = tn.predict_proba(test_data)
print(prob)
print('ROC AUC', roc_auc_score(test_labels, prob[:, 1]))
Theanets multistage training In some cases we need to continue training: e.g., we have new data, or the current trainer is no longer efficient. For this purpose there is the partial_fit method, with which you can continue training using a different trainer or different data.
tn = TheanetsClassifier(features=variables, layers=[10, 10],
                        trainers=[{'optimize': 'rprop'}])
tn.fit(train_data, train_labels)
print('training complete')
Second stage of fitting
tn.partial_fit(train_data, train_labels, **{'optimize': 'adadelta'})

# predict probabilities for each class
prob = tn.predict_proba(test_data)
print(prob)
print('ROC AUC', roc_auc_score(test_labels, prob[:, 1]))
Let's train a network using the Rprop algorithm
import neurolab

nl = NeurolabClassifier(features=variables, layers=[10], epochs=40,
                        trainf=neurolab.train.train_rprop)
nl.fit(train_data, train_labels)
print('training complete')
Pybrain
from rep.estimators import PyBrainClassifier
print(PyBrainClassifier.__doc__)

pb = PyBrainClassifier(features=variables, layers=[10, 2],
                       hiddenclass=['TanhLayer', 'SigmoidLayer'])
pb.fit(train_data, train_labels)
print('training complete')
Advantages of a common interface Let's build an ensemble of neural networks using the bagging meta-algorithm: Bagging over the Theanets classifier (the same can be done with any neural network). In practice, one needs many networks to obtain better predictions than a single network gives.
from sklearn.ensemble import BaggingClassifier

base_tn = TheanetsClassifier(layers=[20], trainers=[{'min_improvement': 0.01}])
bagging_tn = BaggingClassifier(base_estimator=base_tn, n_estimators=3)
bagging_tn.fit(train_data[variables], train_labels)
print('training complete')

prob = bagging_tn.predict_proba(test_data[variables])
print('AUC', roc_auc_score(test_labels, prob[:, 1]))
Gaussian Processes
- model for functions/continuous output
- for a new input, returns predicted output and uncertainty
display(Image(filename="GP_uq.png", width=630)) #source: http://scikit-learn.org/0.17/modules/gaussian_process.html
Tutorial_on_modern_kernel_methods.ipynb
ingmarschuster/rkhs_demo
gpl-3.0
Support Vector Machines
- model for classification
- map data nonlinearly to a higher-dimensional space
- separate points of different classes using a plane (i.e. linearly)
display(Image(filename="SVM.png", width=700)) #source: https://en.wikipedia.org/wiki/Support_vector_machine
Feature engineering and two classification algorithms
Feature engineering in Machine Learning
- feature engineering: map data to features with a function $\FM:\IS\to \RKHS$
- handle nonlinear relations with linear methods ($\FM$ nonlinear)
- handle non-numerical data (e.g. text)
display(Image(filename="monomials_small.jpg", width=800))  # source: Bernhard Schölkopf
Working in Feature Space
- want the Feature Space $\RKHS$ (the codomain of $\FM$) to be a vector space
  - to get nice mathematical structure
  - a definition of inner products induces norms and the possibility to measure angles
- can use linear algebra in $\RKHS$ to solve ML problems
  - inner products, angles, norms, distances
- induces nonlinear operations on the Input Space (the domain of $\FM$)

Two simple classification algorithms
given data points from a mixture of two distributions with densities $p_0,p_1$:
$$x_i \sim 0.5 p_0 + 0.5 p_1$$
and label $l_i = 0$ if $x_i$ was generated by $p_0$, $l_i = 1$ otherwise
figkw = {"figsize": (4,4), "dpi": 150}
np.random.seed(5)
samps_per_distr = 20
data = np.vstack([stats.multivariate_normal(np.array([-2,0]), np.eye(2)*1.5).rvs(samps_per_distr),
                  stats.multivariate_normal(np.array([2,0]), np.eye(2)*1.5).rvs(samps_per_distr)])
distr_idx = np.r_[[0]*samps_per_distr, [1]*samps_per_distr]

f = pl.figure(**figkw)
for (idx, c, marker) in [(0, 'r', (0,3,0)), (1, "b", "x")]:
    pl.scatter(*data[distr_idx==idx,:].T, c=c, marker=marker, alpha=0.4)
pl.show()
Classification using inner products in Feature Space
- compute the mean feature space embeddings
$$\mu_{0} = \frac{1}{N_0} \sum_{l_i = 0} \FM(x_i) ~~~~~~~~ \mu_{1} = \frac{1}{N_1} \sum_{l_i = 1} \FM(x_i)$$
- assign a test point to the most similar class in terms of the inner product between the point and the mean embedding $\prodDot{\FM(x)}{\mu_c}$
$$f_d(x) = \argmax_{c\in\{0,1\}} \prodDot{\FM(x)}{\mu_c}$$
(remember, in $\Reals^2$ canonically: $\prodDot{a}{b} = a_1 b_1 + a_2 b_2$)
pl.figure(**figkw)
for (idx, c, marker) in [(0, 'r', (0,3,0)), (1, "b", "x")]:
    pl.scatter(*data[distr_idx==idx,:].T, c=c, marker=marker, alpha=0.2)
    pl.arrow(0, 0, *data[distr_idx==idx,:].mean(0),
             head_width=0.3, width=0.05, head_length=0.3, fc=c, ec=c)
pl.title(r"Mean embeddings for $\Phi(x)=x$")

pl.figure(**figkw)
for (idx, c, marker) in [(0, 'r', (0,3,0)), (1, "b", "x")]:
    pl.scatter(*data[distr_idx==idx,:].T, c=c, marker=marker, alpha=0.2)
    pl.arrow(0, 0, *data[distr_idx==idx,:].mean(0),
             head_width=0.3, width=0.05, head_length=0.3, fc=c, ec=c)
pl.title(r"Mean embeddings for $\Phi(x)=x$")
pl.scatter(np.ones(1), np.ones(1), c='k', marker='D', alpha=0.8)
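The decision rule $f_d(x) = \argmax_c \prodDot{\FM(x)}{\mu_c}$ is easy to re-implement directly for the identity feature map $\FM(x)=x$. A toy NumPy sketch (the blob locations mirror the figure above, but the data here is freshly generated, not the notebook's):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two Gaussian blobs standing in for the two classes.
x0 = rng.normal(loc=[-2.0, 0.0], size=(20, 2))
x1 = rng.normal(loc=[2.0, 0.0], size=(20, 2))

# Mean embeddings mu_c for Phi(x) = x.
mu = np.stack([x0.mean(axis=0), x1.mean(axis=0)])

def classify(x):
    # argmax over classes c of the inner product <Phi(x), mu_c>
    return int(np.argmax(mu @ x))
```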
Classification using density estimation
estimate the density for each class by centering a Gaussian on each point, taking the mixture as the estimate
$$\widehat{p}_0 = \frac{1}{N_0} \sum_{l_i = 0} \mathcal{N}(\cdot; x_i,\Sigma) ~~~~~~~~ \widehat{p}_1 = \frac{1}{N_1} \sum_{l_i = 1} \mathcal{N}(\cdot; x_i,\Sigma)$$
# Some plotting code
def apply_to_mg(func, *mg):
    # apply a function to points on a meshgrid
    x = np.vstack([e.flat for e in mg]).T
    return np.array([func(i.reshape((1,2))) for i in x]).reshape(mg[0].shape)

def plot_with_contour(samps, data_idx, cont_func, method_name=None, delta=0.025,
                      pl=pl, colormesh_cmap=pl.cm.Pastel2, contour_classif=True):
    x = np.arange(samps.T[0].min()-delta, samps.T[1].max()+delta, delta)
    y = np.arange(samps.T[1].min()-delta, samps.T[1].max()+delta, delta)
    X, Y = np.meshgrid(x, y)
    Z = apply_to_mg(cont_func, X, Y)
    Z = Z.reshape(X.shape)

    fig = pl.figure(**figkw)
    if colormesh_cmap is not None:
        bound = np.abs(Z).max()
        pl.pcolor(X, Y, Z, cmap=colormesh_cmap, alpha=0.5, edgecolors=None,
                  vmin=-bound, vmax=bound)
    if contour_classif is True:
        c = pl.contour(X, Y, Z, colors=['k',], alpha=0.5, linestyles=['--'],
                       levels=[0], linewidths=0.7)
    else:
        pl.contour(X, Y, Z, linewidths=0.7)
    if method_name is not None:
        pl.title(method_name)
    for (idx, c, marker) in [(0, 'r', (0,3,0)), (1, "b", "x")]:
        pl.scatter(*data[distr_idx==idx,:].T, c=c, marker=marker, alpha=0.4)
    pl.show()
    pl.close()

est_dens_1 = dist.mixt(2, [dist.mvnorm(x, np.eye(2)*0.1) for x in data[:4]], [1./4]*4)
plot_with_contour(data, distr_idx, lambda x: exp(est_dens_1.logpdf(x)),
                  colormesh_cmap=None, contour_classif=False)

est_dens_1 = dist.mixt(2, [dist.mvnorm(x, np.eye(2)*0.1, 10) for x in data[:samps_per_distr]],
                       [1./samps_per_distr]*samps_per_distr)
plot_with_contour(data, distr_idx, lambda x: exp(est_dens_1.logpdf(x)),
                  colormesh_cmap=None, contour_classif=False)
Classification using density estimation
estimate the density for each class by centering a Gaussian on each point, taking the mixture as the estimate
$$\widehat{p}_0 = \frac{1}{N_0} \sum_{l_i = 0} \mathcal{N}(\cdot; x_i,\Sigma) ~~~~~~~~ \widehat{p}_1 = \frac{1}{N_1} \sum_{l_i = 1} \mathcal{N}(\cdot; x_i,\Sigma)$$
- assign a test point $x$ to the class $c$ that gives the highest value for $\widehat{p}_c(x)$
- $\widehat{p}_c$ is known as a kernel density estimate (KDE)
  - a different but overlapping notion of 'kernel'
- this classification algorithm is known as Parzen windows classification

<center><h3>For a certain feature map and inner product, both algorithms are the same!</h3></center>
<center>Let's construct this feature map and inner product.</center>

Kernels and feature space
Positive definite functions and feature spaces
- let $\PDK:\IS\times\IS \to \Reals$, called a kernel
- if $\PDK$ is symmetric and positive semidefinite (psd), then there exists $\FM: \IS \to \RKHS$ to a Hilbert space $\RKHS$ such that
$$\PDK(x_i, x_j) = \prodDot{\FM(x_i)}{\FM(x_j)}_\RKHS$$
i.e.
$\PDK$ computes the inner product after mapping to some $\RKHS$

Gram matrix (1)
If all matrices
$$\Gram_{X}=\begin{bmatrix} \PDK(x_1, x_1) & \dots & \PDK(x_1, x_N)\\ \PDK(x_2, x_1) & \ddots & \vdots\\ \vdots & & \vdots\\ \PDK(x_N, x_1) & \dots & \PDK(x_N, x_N) \end{bmatrix}$$
are symmetric positive semidefinite, then $\PDK$ is a psd kernel; such a matrix is called a Gram matrix.

Gram matrix (2)
sometimes mixed Gram matrices are needed
$$\Gram_{XY} = \begin{bmatrix} \PDK(x_1, y_1) & \PDK(x_1, y_2) & \dots & \PDK(x_1, y_M)\\ \PDK(x_2, y_1) & \ddots & & \vdots\\ \vdots & & & \vdots\\ \PDK(x_N, y_1) & \dots & & \PDK(x_N, y_M) \end{bmatrix}$$

Examples of psd kernels
- Linear: $\PDK_L(x,x') = \prodDot{x}{x'} = x_1 x'_1 + x_2 x'_2 + \dots$
- Gaussian: $\PDK_G(x,x') = \exp({-0.5\,(x-x')^{\top}\Sigma^{-1}(x-x')})$

PSD kernels
- easy to construct $\PDK$ given $\FM$: $\PDK(x_i, x_j) = \prodDot{\FM(x_i)}{\FM(x_j)}$
- construction of $\FM$ given $\PDK$ not trivial but still elementary
- given $\FM$ and an inner product in feature space, we can endow the space with the norm induced by the inner product
$$\|g\|_\RKHS = \sqrt{\prodDot{g}{g}_\RKHS}~~~\textrm{for}~g \in \RKHS$$
- can measure angles in the new space
$$\prodDot{g}{f}_\RKHS = \cos(\angle[g,f])~\|g\|_\RKHS ~\|f\|_\RKHS$$

Construction of the canonical feature map (Aronszajn map)
Plan
- construction of $\FM$ from $\PDK$
- definition of an inner product in the new space $\RKHS$ such that in fact $\PDK(x,x') = \prodDot{\FM(x)}{\FM(x')}$
- the feature for each $x \in \IS$ will be a function from $\IS$ to $\Reals$
$$\FM:\IS \to \Reals^\IS$$

Canonical feature map (Aronszajn map)
- pick $\FM(x) = \PDK(\cdot, x)$
- Linear kernel: $\FM_L(x) = \prodDot{\cdot}{x}$
- Gaussian kernel: $\FM_G(x) = \exp\left(-0.5{\|\cdot -x \|^2}/{\sigma^2}\right)$.
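The Gram-matrix characterization is easy to check numerically for the Gaussian kernel. A small sketch (the sample points and the bandwidth are arbitrary; the helper name is ours):

```python
import numpy as np

def gaussian_gram(X, sigma=1.0):
    # Gram_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * sq_dists / sigma**2)

X = np.random.default_rng(1).normal(size=(10, 2))
G = gaussian_gram(X)

# Symmetric, and all eigenvalues nonnegative up to round-off: psd.
eigvals = np.linalg.eigvalsh(G)
```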
let linear combinations of features also be in $\RKHS$:
$$f(\cdot)=\sum_{i=1}^m a_i \PDK(\cdot, x_i) \in \RKHS$$
for $a_i \in \Reals$
- $\RKHS$ is a vector space over $\Reals$: if $f(\cdot)$ and $g(\cdot)$ are functions from $\IS$ to $\Reals$, so are $a\,f(\cdot)$ for $a \in \Reals$ and $f(\cdot)+g(\cdot)$

Canonical inner product (1)
for $f(\cdot)=\sum_{i=1}^m a_i \PDK(\cdot, x_i) \in \RKHS$ and $g(\cdot)=\sum_{j=1}^{m'} b_j \PDK(\cdot, x'_j) \in \RKHS$ define the inner product
$$\prodDot{f}{g} = \sum_{i=1}^m \sum_{j=1}^{m'} b_j a_i \PDK(x'_j, x_i)$$
In particular $\prodDot{\PDK(\cdot,x)}{\PDK(\cdot,x')}=\PDK(x,x')$ (the reproducing property of the kernel in its $\RKHS$)

Canonical inner product (2)
$\RKHS$ is a Hilbert space with this inner product, as it is
- positive definite
- linear in its first argument
- symmetric
- complete

$\RKHS$ is called a Reproducing Kernel Hilbert Space (RKHS).

Equivalence of classification algorithms
Recall the mean canonical feature and the kernel density estimate
$$\widehat{p}_0 = \frac{1}{N_0} \sum_{l_i = 0} \mathcal{N}(\cdot; x_i,\Sigma) ~~~~~~~~ \mu_{0} = \frac{1}{N_0} \sum_{l_i = 0} \PDK(\cdot, x_i)$$
observe
$$\frac{1}{N_0} \sum_{l_i = 0} \mathcal{N}(x^*; x_i,\Sigma) = \prodDot{\frac{1}{N_0} \sum_{l_i = 0} \PDK(\cdot, x_i)}{\PDK(\cdot, x^*)}$$
if $\PDK$ is a Gaussian density with covariance $\Sigma$: kernel mean and Parzen windows classification are equivalent!

Let's look at example classification output
class KMEclassification(object):
    def __init__(self, samps1, samps2, kernel):
        self.de1 = ro.RKHSDensityEstimator(samps1, kernel, 0.1)
        self.de2 = ro.RKHSDensityEstimator(samps2, kernel, 0.1)

    def classification_score(self, test):
        return (self.de1.eval_kme(test) - self.de2.eval_kme(test))

data, distr_idx = sklearn.datasets.make_circles(n_samples=400, factor=.3, noise=.05)

kc = KMEclassification(data[distr_idx==0,:], data[distr_idx==1,:], ro.LinearKernel())
plot_with_contour(data, distr_idx, kc.classification_score,
                  'Inner Product classif. ' + "Linear",
                  pl=plt, contour_classif=True, colormesh_cmap=pl.cm.bwr)

kc = KMEclassification(data[distr_idx==0,:], data[distr_idx==1,:], ro.GaussianKernel(0.3))
plot_with_contour(data, distr_idx, kc.classification_score,
                  'Inner Product classif. ' + "Gaussian",
                  pl=plt, contour_classif=True, colormesh_cmap=pl.cm.bwr)
Applications
Kernel mean embedding
- the mean feature with the canonical feature map is $\frac{1}{N} \sum_{i = 1}^N \FM(x_i) = \frac{1}{N} \sum_{i = 1}^N \PDK(x_i, \cdot)$
- this is the estimate of the kernel mean embedding of the distribution/density $\rho$ of the $x_i$
$$\mu_\rho(\cdot) = \int \PDK(x,\cdot) \mathrm{d}\rho(x)$$
- using this we can define a distance between distributions $\rho, q$ as
$$\mathrm{MMD}(\rho, q)^2 = \|\mu_\rho - \mu_q \|^2_\RKHS$$
called the maximum mean discrepancy (MMD)
- has been used as the critic in generative adversarial networks (i.e. generative network as usual, MMD as a drop-in for the discriminator)

Conditional mean embedding (1)
- we can define operators on RKHSs
- these are maps from RKHS elements to RKHS elements (i.e. mappings between functionals)
- one such operator is the conditional mean embedding $\mathcal{D}_{Y|X}$
- given the embedding of the input variable's distribution, it returns the embedding of the output variable's distribution
- regression with an uncertainty estimate

Conditional mean embedding (2)
An example
out_samps = data[distr_idx==0,:1] + 1
inp_samps = data[distr_idx==0,1:] + 1

def plot_mean_embedding(cme, inp_samps, out_samps, p1=0., p2=1., offset=0.5):
    x = np.linspace(inp_samps.min()-offset, inp_samps.max()+offset, 200)
    fig = pl.figure(figsize=(10, 5))
    ax = [pl.subplot2grid((2, 2), (0, 1)),
          pl.subplot2grid((2, 2), (0, 0), rowspan=2),
          pl.subplot2grid((2, 2), (1, 1))]
    ax[1].scatter(out_samps, inp_samps, alpha=0.3, color='r')
    ax[1].set_xlabel('Output')
    ax[1].set_ylabel('Input')
    ax[1].axhline(p1, 0, 8, color='g', linestyle='--')
    ax[1].axhline(p2, 0, 8, color='b', linestyle='--')
    ax[1].set_title("%d input-output pairs" % len(out_samps))
    ax[1].set_yticks((p1, p2))

    e = cme.lhood(np.array([[p1], [p2]]), x[:, None]).T
    #ax[0].plot(x, d[0], '-', label='cond. density')
    ax[2].plot(x, e[0], 'g--', label='cond. mean emb.')
    ax[2].set_title(r"p(outp | inp=%.1f)" % p1)
    ax[0].plot(x, e[1], 'b--', label='cond. mean emb.')
    ax[0].set_title(r"p(outp | inp=%.1f)" % p2)
    #ax[2].legend(loc='best')
    fig.tight_layout()

cme = ro.ConditionMeanEmbedding(inp_samps, out_samps,
                                ro.GaussianKernel(0.3), ro.GaussianKernel(0.3), 5)
plot_mean_embedding(cme, inp_samps, out_samps, 0.3, 2.)
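Expanding the MMD norm from the Applications section into kernel evaluations gives the standard (biased) estimator $\frac{1}{N^2}\sum_{i,j} \PDK(x_i,x_j) - \frac{2}{NM}\sum_{i,j} \PDK(x_i,y_j) + \frac{1}{M^2}\sum_{i,j} \PDK(y_i,y_j)$. A NumPy sketch with a Gaussian kernel (the helper names are ours, not from the `ro` module used above):

```python
import numpy as np

def gauss_k(A, B, sigma=1.0):
    # Gaussian kernel matrix between the rows of A and the rows of B.
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / sigma**2)

def mmd2(X, Y, sigma=1.0):
    # Biased estimate of ||mu_rho - mu_q||^2 in the RKHS.
    return (gauss_k(X, X, sigma).mean()
            - 2 * gauss_k(X, Y, sigma).mean()
            + gauss_k(Y, Y, sigma).mean())

rng = np.random.default_rng(2)
samps = rng.normal(size=(50, 2))
shifted = rng.normal(loc=3.0, size=(50, 2))
```

For identical samples the estimate is zero; for well-separated samples it is clearly positive.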
Conditional mean embedding (3)
closed form estimate given samples from input and output
$$\begin{bmatrix}\PDK_Y(y_1, \cdot) & \dots & \PDK_Y(y_N, \cdot)\end{bmatrix} \Gram_X^{-1} \begin{bmatrix}\PDK_X(x_1, \cdot)\\ \vdots \\ \PDK_X(x_N, \cdot)\end{bmatrix}$$
closed form estimate of the output embedding for a new input $x^*$
$$\prodDot{\mathcal{D}_{Y|X}}{\PDK(x^*,\cdot)} = \begin{bmatrix}\PDK_Y(y_1, \cdot) & \dots & \PDK_Y(y_N, \cdot)\end{bmatrix} \Gram_X^{-1} \begin{bmatrix}\PDK_X(x_1, x^*)\\ \vdots \\ \PDK_X(x_N, x^*)\end{bmatrix}$$

Conditional mean embedding (4)
- similar to Gaussian processes, but the output distribution is more flexible
  - mixture of Gaussians, Laplace, distributions on discrete objects
  - multimodality can be represented
  - multidimensional output
  - output can be a combination of e.g. strings and reals
- the conditional mean embedding was used to construct the Kernel Bayes Rule, enabling closed form Bayesian inference
- other types of operators have been derived (see Stefan Klus' talk next week)
HTML('<iframe width="560" height="315" src="https://www.youtube.com/embed/MpzaCCbX-z4?rel=0&amp;showinfo=0&amp;start=148" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>')
display(Image(filename="Pendulum_eigenfunctions.png", width=700))
display(Image(filename="KeywordClustering.png", width=700))
In this example you will learn how to make use of the periodicity of the electrodes. As seen in TB 4, the transmission calculation takes a considerable amount of time. In this example we will redo the same calculation, but speed it up (no approximations made). A large computational effort goes into calculating the self-energies, which basically amounts to inverting, multiplying and adding matrices, roughly 10-20 times per $k$-point, per energy point, per electrode. For systems with large electrodes compared to the full device, this becomes more demanding than calculating the Green function for the system. When there is periodicity in the electrodes along the transverse semi-infinite direction (not along the transport direction), one can utilize Bloch's theorem to reduce the computational cost of calculating the self-energy. In ANY calculation, if you have periodicity, please USE it. In this example you should scour the tbtrans manual on how to enable Bloch's theorem; once enabled, the calculation should be roughly 3-4 times as fast, something that is non-negligible for large systems.
graphene = sisl.geom.graphene(orthogonal=True)
TB_05/run.ipynb
zerothi/ts-tbt-sisl-tutorial
gpl-3.0
Note that the lines below differ from the corresponding lines in TB 4, i.e. we save the electrode electronic structure without extending it 25 times.
H_elec = sisl.Hamiltonian(graphene)
H_elec.construct(([0.1, 1.43], [0., -2.7]))
H_elec.write('ELEC.nc')
See TB 2 for details on why we choose repeat/tile on the Hamiltonian object and not on the geometry, prior to construction.
H = H_elec.repeat(25, axis=0).tile(15, axis=1)
H = H.remove(H.geometry.close(H.geometry.center(what='cell'), R=10.))

dangling = [ia for ia in H.geometry.close(H.geometry.center(what='cell'), R=14.)
            if len(H.edges(ia)) < 3]
H = H.remove(dangling)

edge = [ia for ia in H.geometry.close(H.geometry.center(what='cell'), R=14.)
        if len(H.edges(ia)) < 4]
edge = np.array(edge)

# Pretty-print the list of atoms
print(sisl.utils.list2str(edge + 1))

H.geometry.write('device.xyz')
H.write('DEVICE.nc')
Exercises
Instead of analysing the same thing as in TB 4, you should perform the following actions to explore the available data-analysis capabilities of TBtrans. Please note the difference in run-time between example 04 and this example. Always use Bloch's theorem when applicable!

HINT: please copy as much as you like from example 04 to simplify the following tasks.

- Read in the resulting file into a variable called tbt.
- In the following we will concentrate on only looking at $\Gamma$-point related quantities, i.e. all quantities should only be plotted for this $k$-point. To extract information for one or more subsets of points you should look into the function help(tbt.kindex), which may be used to find a resulting $k$-point index in the result file.
- Plot the transmission ($\Gamma$-point only). To extract a subset $k$-point you should read the documentation for the functions (hint: kavg is the keyword you are looking for).
  - Full transmission
  - Bulk transmission
- Plot the DOS with normalization according to the number of atoms ($\Gamma$ only). You may decide which atoms you examine.
  - The Green function DOS
  - The spectral DOS
  - The bulk DOS
- TIME: Do the same calculation using only tiling, i.e. H_elec.tile(25, axis=0).tile(15, axis=1) instead of repeat/tile. Which of repeat or tile is faster?

Transmission
Density of states
tbt = sisl.get_sile('siesta.TBT.nc')

# Easier manipulation of the geometry
geom = tbt.geometry
a_dev = tbt.a_dev  # the indices where we have DOS

# Extract the DOS, per orbital (hence sum=False)
DOS = tbt.ADOS(0, sum=False)

# Normalize DOS for plotting (maximum size == 400)
# This array has *all* energy points and orbitals
DOS /= DOS.max() / 400
a_xyz = geom.xyz[a_dev, :2]

%%capture
fig = plt.figure(figsize=(12,4))
ax = plt.axes()
scatter = ax.scatter(a_xyz[:, 0], a_xyz[:, 1], 1)
ax.set_xlabel(r'$x$ [Ang]')
ax.set_ylabel(r'$y$ [Ang]')
ax.axis('equal')

# If this animation does not work, then don't spend time on it!
def animate(i):
    ax.set_title('Energy {:.3f} eV'.format(tbt.E[i]))
    scatter.set_sizes(DOS[i])
    return scatter,

anim = animation.FuncAnimation(fig, animate, frames=len(tbt.E), interval=100, repeat=False)
HTML(anim.to_html5_video())
We're first going to train a multinomial logistic regression using simple gradient descent. TensorFlow works like this:
- First you describe the computation that you want to see performed: what the inputs, the variables, and the operations look like. These get created as nodes over a computation graph. This description is all contained within the block below:
  with graph.as_default():
      ...
- Then you can run the operations on this graph as many times as you want by calling session.run(), providing it outputs to fetch from the graph that get returned. This runtime operation is all contained in the block below:
  with tf.Session(graph=graph) as session:
      ...

Let's load all the data into TensorFlow and build the computation graph corresponding to our training:
# With gradient descent training, even this much (10000) data is prohibitive.
# Subset the training data for faster turnaround.
train_subset = 10000

graph = tf.Graph()
with graph.as_default():
    # Input data.
    # Load the training, validation and test data into constants that are
    # attached to the graph.
    tf_train_dataset = tf.constant(train_dataset[:train_subset, :])
    tf_train_labels = tf.constant(train_labels[:train_subset])
    tf_valid_dataset = tf.constant(valid_dataset)
    tf_test_dataset = tf.constant(test_dataset)

    # Variables.
    # These are the parameters that we are going to be training. The weight
    # matrix will be initialized using random values following a (truncated)
    # normal distribution. The biases get initialized to zero.
    weights = tf.Variable(
        tf.truncated_normal([image_size * image_size, num_labels]))
    biases = tf.Variable(tf.zeros([num_labels]))

    # Training computation.
    # We multiply the inputs with the weight matrix, and add biases. We compute
    # the softmax and cross-entropy (it's one operation in TensorFlow, because
    # it's very common, and it can be optimized). We take the average of this
    # cross-entropy across all training examples: that's our loss.
    logits = tf.matmul(tf_train_dataset, weights) + biases
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))

    # Optimizer.
    # We are going to find the minimum of this loss using gradient descent.
    optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

    # Predictions for the training, validation, and test data.
    # These are not part of training, but merely here so that we can report
    # accuracy figures as we train.
    train_prediction = tf.nn.softmax(logits)
    valid_prediction = tf.nn.softmax(
        tf.matmul(tf_valid_dataset, weights) + biases)
    test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
google_dl_udacity/lesson3/2_fullyconnected.ipynb
jinzishuai/learn2deeplearn
gpl-3.0
Results
Lesson 1 sklearn LogisticRegression:
- 50 training samples: score 0.608200
- 100 training samples: score 0.708200
- 1000 training samples: score 0.829200
- 5000 training samples: score 0.846200

TensorFlow results above:
- 50: 43.3%
- 100: 53.1%
- 1000: 76.8%
- 5000: 81.6%
- 10000: 82.0%

Let's now switch to stochastic gradient descent training instead, which is much faster. The graph will be similar, except that instead of holding all the training data in a constant node, we create a Placeholder node which will be fed actual data at every call of session.run().
batch_size = 128

graph = tf.Graph()
with graph.as_default():
    # Input data. For the training data, we use a placeholder that will be fed
    # at run time with a training minibatch.
    tf_train_dataset = tf.placeholder(tf.float32,
                                      shape=(batch_size, image_size * image_size))
    tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
    tf_valid_dataset = tf.constant(valid_dataset)
    tf_test_dataset = tf.constant(test_dataset)

    # Variables.
    weights = tf.Variable(
        tf.truncated_normal([image_size * image_size, num_labels]))
    biases = tf.Variable(tf.zeros([num_labels]))

    # Training computation.
    logits = tf.matmul(tf_train_dataset, weights) + biases
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))

    # Optimizer.
    optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

    # Predictions for the training, validation, and test data.
    train_prediction = tf.nn.softmax(logits)
    valid_prediction = tf.nn.softmax(
        tf.matmul(tf_valid_dataset, weights) + biases)
    test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
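At run time, each session.run() call gets a feed_dict built from a fresh minibatch. The usual wrap-around offset arithmetic can be sketched without TensorFlow (the sizes here are stand-ins, not the notebook's actual data):

```python
batch_size = 4
num_examples = 10  # stand-in for train_labels.shape[0]

def minibatch_offset(step, num_examples, batch_size):
    # The offset cycles through the data so every slice stays in range;
    # the batch fed via feed_dict is then data[offset:offset + batch_size].
    return (step * batch_size) % (num_examples - batch_size)

offsets = [minibatch_offset(s, num_examples, batch_size) for s in range(5)]
```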
Demons Registration This function will align the fixed and moving images using the Demons registration method. If given a mask, the similarity metric will be evaluated using points sampled inside the mask. If given fixed and moving points the similarity metric value and the target registration errors will be displayed during registration. As this notebook performs intra-modal registration, we can readily use the Demons family of algorithms. We start by using the registration framework with SetMetricAsDemons. We use a multiscale approach which is readily available in the framework. We then illustrate how to use the Demons registration filters that are not part of the registration framework.
def demons_registration(fixed_image, moving_image, fixed_points=None, moving_points=None):
    registration_method = sitk.ImageRegistrationMethod()

    # Create initial identity transformation.
    transform_to_displacment_field_filter = sitk.TransformToDisplacementFieldFilter()
    transform_to_displacment_field_filter.SetReferenceImage(fixed_image)
    # The image returned from the initial_transform_filter is transferred to the
    # transform and cleared out.
    initial_transform = sitk.DisplacementFieldTransform(
        transform_to_displacment_field_filter.Execute(sitk.Transform()))

    # Regularization (update field - viscous, total field - elastic).
    initial_transform.SetSmoothingGaussianOnUpdate(varianceForUpdateField=0.0,
                                                   varianceForTotalField=2.0)

    registration_method.SetInitialTransform(initial_transform)
    registration_method.SetMetricAsDemons(10)  # intensities are equal if the difference is less than 10HU

    # Multi-resolution framework.
    registration_method.SetShrinkFactorsPerLevel(shrinkFactors=[4,2,1])
    registration_method.SetSmoothingSigmasPerLevel(smoothingSigmas=[8,4,0])

    registration_method.SetInterpolator(sitk.sitkLinear)
    # If you have time, run this code as is, otherwise switch to the gradient descent optimizer.
    registration_method.SetOptimizerAsConjugateGradientLineSearch(learningRate=1.0,
                                                                  numberOfIterations=20,
                                                                  convergenceMinimumValue=1e-6,
                                                                  convergenceWindowSize=10)
    #registration_method.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=20, convergenceMinimumValue=1e-6, convergenceWindowSize=10)
    registration_method.SetOptimizerScalesFromPhysicalShift()

    # If corresponding points in the fixed and moving image are given then we
    # display the similarity metric and the TRE during the registration.
    if fixed_points and moving_points:
        registration_method.AddCommand(sitk.sitkStartEvent, rc.metric_and_reference_start_plot)
        registration_method.AddCommand(sitk.sitkEndEvent, rc.metric_and_reference_end_plot)
        registration_method.AddCommand(sitk.sitkIterationEvent,
                                       lambda: rc.metric_and_reference_plot_values(registration_method, fixed_points, moving_points))

    return registration_method.Execute(fixed_image, moving_image)
66_Registration_Demons.ipynb
thewtex/SimpleITK-Notebooks
apache-2.0
Running the Demons registration on this data will <font color="red">take a long time</font> (run it before going home). If you are less interested in accuracy, you can switch the optimizer from conjugate gradient line search to gradient descent; it will run much faster, but the results are worse.
#%%timeit -r1 -n1
# Uncomment the line above if you want to time the running of this cell.

# Select the fixed and moving images, valid entries are in [0,9].
fixed_image_index = 0
moving_image_index = 7

tx = demons_registration(fixed_image=images[fixed_image_index],
                         moving_image=images[moving_image_index],
                         fixed_points=points[fixed_image_index],
                         moving_points=points[moving_image_index])

initial_errors_mean, initial_errors_std, _, initial_errors_max, initial_errors = \
    ru.registration_errors(sitk.Euler3DTransform(),
                           points[fixed_image_index], points[moving_image_index])
final_errors_mean, final_errors_std, _, final_errors_max, final_errors = \
    ru.registration_errors(tx, points[fixed_image_index], points[moving_image_index])

plt.hist(initial_errors, bins=20, alpha=0.5, label='before registration', color='blue')
plt.hist(final_errors, bins=20, alpha=0.5, label='after registration', color='green')
plt.legend()
plt.title('TRE histogram')

print('Initial alignment errors in millimeters, mean(std): {:.2f}({:.2f}), max: {:.2f}'.format(
    initial_errors_mean, initial_errors_std, initial_errors_max))
print('Final alignment errors in millimeters, mean(std): {:.2f}({:.2f}), max: {:.2f}'.format(
    final_errors_mean, final_errors_std, final_errors_max))
66_Registration_Demons.ipynb
thewtex/SimpleITK-Notebooks
apache-2.0
SimpleITK also includes a set of Demons filters which are independent of the ImageRegistrationMethod. These include:
1. DemonsRegistrationFilter
2. DiffeomorphicDemonsRegistrationFilter
3. FastSymmetricForcesDemonsRegistrationFilter
4. SymmetricForcesDemonsRegistrationFilter

As these filters are independent of the ImageRegistrationMethod we do not have access to the multiscale framework. Luckily it is easy to implement our own multiscale framework in SimpleITK, which is what we do in the next cell.
def smooth_and_resample(image, shrink_factor, smoothing_sigma): """ Args: image: The image we want to resample. shrink_factor: A number greater than one, such that the new image's size is original_size/shrink_factor. smoothing_sigma: Sigma for Gaussian smoothing, this is in physical (image spacing) units, not pixels. Return: Image which is a result of smoothing the input and then resampling it using the given sigma and shrink factor. """ smoothed_image = sitk.SmoothingRecursiveGaussian(image, smoothing_sigma) original_spacing = image.GetSpacing() original_size = image.GetSize() new_size = [int(sz/float(shrink_factor) + 0.5) for sz in original_size] new_spacing = [((original_sz-1)*original_spc)/(new_sz-1) for original_sz, original_spc, new_sz in zip(original_size, original_spacing, new_size)] return sitk.Resample(smoothed_image, new_size, sitk.Transform(), sitk.sitkLinear, image.GetOrigin(), new_spacing, image.GetDirection(), 0.0, image.GetPixelIDValue()) def multiscale_demons(registration_algorithm, fixed_image, moving_image, initial_transform = None, shrink_factors=None, smoothing_sigmas=None): """ Run the given registration algorithm in a multiscale fashion. The original scale should not be given as input as the original images are implicitly incorporated as the base of the pyramid. Args: registration_algorithm: Any registration algorithm that has an Execute(fixed_image, moving_image, displacement_field_image) method. fixed_image: Resulting transformation maps points from this image's spatial domain to the moving image spatial domain. moving_image: Resulting transformation maps points from the fixed_image's spatial domain to this image's spatial domain. initial_transform: Any SimpleITK transform, used to initialize the displacement field. shrink_factors: Shrink factors relative to the original image's size. smoothing_sigmas: Amount of smoothing which is done prior to resampling the image using the given shrink factor. These are in physical (image spacing) units. 
Returns: SimpleITK.DisplacementFieldTransform """ # Create image pyramid. fixed_images = [fixed_image] moving_images = [moving_image] if shrink_factors: for shrink_factor, smoothing_sigma in reversed(list(zip(shrink_factors, smoothing_sigmas))): fixed_images.append(smooth_and_resample(fixed_images[0], shrink_factor, smoothing_sigma)) moving_images.append(smooth_and_resample(moving_images[0], shrink_factor, smoothing_sigma)) # Create initial displacement field at lowest resolution. # Currently, the pixel type is required to be sitkVectorFloat64 because of a constraint imposed by the Demons filters. if initial_transform: initial_displacement_field = sitk.TransformToDisplacementField(initial_transform, sitk.sitkVectorFloat64, fixed_images[-1].GetSize(), fixed_images[-1].GetOrigin(), fixed_images[-1].GetSpacing(), fixed_images[-1].GetDirection()) else: initial_displacement_field = sitk.Image(fixed_images[-1].GetWidth(), fixed_images[-1].GetHeight(), fixed_images[-1].GetDepth(), sitk.sitkVectorFloat64) initial_displacement_field.CopyInformation(fixed_images[-1]) # Run the registration. initial_displacement_field = registration_algorithm.Execute(fixed_images[-1], moving_images[-1], initial_displacement_field) # Start at the top of the pyramid and work our way down. for f_image, m_image in reversed(list(zip(fixed_images[0:-1], moving_images[0:-1]))): initial_displacement_field = sitk.Resample (initial_displacement_field, f_image) initial_displacement_field = registration_algorithm.Execute(f_image, m_image, initial_displacement_field) return sitk.DisplacementFieldTransform(initial_displacement_field)
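The new_spacing expression in smooth_and_resample is chosen so that the resampled grid spans the same physical extent as the original, i.e. (new_size-1)*new_spacing equals (original_size-1)*original_spacing. A quick sanity check of that relationship, using hypothetical grid sizes:

```python
# original grid: 255 samples spaced 1.0mm apart -> physical extent of 254.0mm
original_size, original_spacing = 255, 1.0
shrink_factor = 2

# same expressions as in smooth_and_resample above
new_size = int(original_size / float(shrink_factor) + 0.5)
new_spacing = ((original_size - 1) * original_spacing) / (new_size - 1)

extent_before = (original_size - 1) * original_spacing
extent_after = (new_size - 1) * new_spacing
print(new_size, new_spacing, extent_after)
```

The extent is preserved exactly, which is why the formula uses size-1 (the distance between the first and last sample centers) rather than size.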
66_Registration_Demons.ipynb
thewtex/SimpleITK-Notebooks
apache-2.0
Now we will use our newly minted multiscale framework to perform registration with the Demons filters. Some things you can easily try out by editing the code below:
1. Is there really a need for multiscale? Just call the multiscale_demons method without the shrink_factors and smoothing_sigmas parameters.
2. Which Demons filter should you use? Configure the other filters and see if our selection is the best choice (accuracy/time).
# Define a simple callback which allows us to monitor the Demons filter's progress. def iteration_callback(filter): print('\r{0}: {1:.2f}'.format(filter.GetElapsedIterations(), filter.GetMetric()), end='') fixed_image_index = 0 moving_image_index = 7 # Select a Demons filter and configure it. demons_filter = sitk.FastSymmetricForcesDemonsRegistrationFilter() demons_filter.SetNumberOfIterations(20) # Regularization (update field - viscous, total field - elastic). demons_filter.SetSmoothDisplacementField(True) demons_filter.SetStandardDeviations(2.0) # Add our simple callback to the registration filter. demons_filter.AddCommand(sitk.sitkIterationEvent, lambda: iteration_callback(demons_filter)) # Run the registration. tx = multiscale_demons(registration_algorithm=demons_filter, fixed_image = images[fixed_image_index], moving_image = images[moving_image_index], shrink_factors = [4,2], smoothing_sigmas = [8,4]) # Compare the initial and final TREs. initial_errors_mean, initial_errors_std, _, initial_errors_max, initial_errors = ru.registration_errors(sitk.Euler3DTransform(), points[fixed_image_index], points[moving_image_index]) final_errors_mean, final_errors_std, _, final_errors_max, final_errors = ru.registration_errors(tx, points[fixed_image_index], points[moving_image_index]) plt.hist(initial_errors, bins=20, alpha=0.5, label='before registration', color='blue') plt.hist(final_errors, bins=20, alpha=0.5, label='after registration', color='green') plt.legend() plt.title('TRE histogram'); print('\nInitial alignment errors in millimeters, mean(std): {:.2f}({:.2f}), max: {:.2f}'.format(initial_errors_mean, initial_errors_std, initial_errors_max)) print('Final alignment errors in millimeters, mean(std): {:.2f}({:.2f}), max: {:.2f}'.format(final_errors_mean, final_errors_std, final_errors_max))
66_Registration_Demons.ipynb
thewtex/SimpleITK-Notebooks
apache-2.0
A Slightly Bigger Word-Document Matrix

The example word-document matrix is taken from http://makeyourowntextminingtoolkit.blogspot.co.uk/2016/11/so-many-dimensions-and-how-to-reduce.html but expanded to cover a 3rd topic related to a home or house
# create a simple word-document matrix as a pandas dataframe, the content values have been normalised words = ['wheel', 'seat', 'engine', 'slice', 'oven', 'boil', 'door', 'kitchen', 'roof'] print(words) documents = ['doc1', 'doc2', 'doc3', 'doc4', 'doc5', 'doc6', 'doc7', 'doc8', 'doc9'] word_doc = pandas.DataFrame([[0.5,0.3333, 0.25, 0, 0, 0, 0, 0, 0], [0.25, 0.3333, 0, 0, 0, 0, 0, 0.25, 0], [0.25, 0.3333, 0.75, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0.5, 0.5, 0.6, 0, 0, 0], [0, 0, 0, 0.3333, 0.1667, 0, 0.5, 0, 0], [0, 0, 0, 0.1667, 0.3333, 0.4, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0.25, 0.25], [0, 0, 0, 0, 0, 0, 0.5, 0.25, 0.25], [0, 0, 0, 0, 0, 0, 0, 0.25, 0.5]], index=words, columns=documents) # and show it word_doc
A03_svd_applied_to_slightly_bigger_word_document_matrix.ipynb
makeyourowntextminingtoolkit/makeyourowntextminingtoolkit
gpl-2.0
Yes, that worked .. the reconstructed A2 is the same as the original A (within the bounds of small floating point accuracy).

Now Reduce Dimensions, Extract Topics

Here we use only the top 3 values of the S singular value matrix, a pretty brutal reduction in dimensions! Why 3, and not 2? We'll only plot 2 dimensions for the document cluster view, and later we'll use 3 dimensions for the topic word view
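For reference, the U and S used below come from an SVD computed in earlier cells (not shown in this excerpt). A minimal sketch of that step with NumPy, using a small stand-in matrix (the names U, S mirror the cells below, but the matrix here is hypothetical, not the word-document matrix above):

```python
import numpy
import pandas

# small stand-in for the word-document matrix A from the earlier cells
A = pandas.DataFrame([[0.9, 0.1, 0.0],
                      [0.8, 0.2, 0.0],
                      [0.0, 0.1, 0.7]])

# full SVD: A = U . S . V^T
U, s, Vt = numpy.linalg.svd(A, full_matrices=False)
S = numpy.diag(s)                      # singular values as a diagonal matrix

# reconstruction: should match A within floating point accuracy
A2 = numpy.dot(U, numpy.dot(S, Vt))
print(numpy.round(A2, decimals=2))
```

numpy.linalg.svd returns the singular values as a 1-d vector, so we promote them to a diagonal matrix before taking products like U.S.Vt.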
# S_reduced is the same as S but with only the top 3 elements kept S_reduced = numpy.zeros_like(S) # only keep the top 3 singular values l = 3 S_reduced[:l, :l] = S[:l,:l] # show S_reduced which has less info than original S print("S_reduced =\n", numpy.round(S_reduced, decimals=2))
A03_svd_applied_to_slightly_bigger_word_document_matrix.ipynb
makeyourowntextminingtoolkit/makeyourowntextminingtoolkit
gpl-2.0
The above shows that there are indeed 3 clusters of documents. That matches our expectations as we constructed the example data set that way.

Topics from New View of Words
# topics are a linear combination of original words U_S_reduced = numpy.dot(U, S_reduced) df = pandas.DataFrame(numpy.round(U_S_reduced, decimals=2), index=words) # show colour coded so it is easier to see significant word contributions to a topic df.style.background_gradient(cmap=plt.get_cmap('Blues'), low=0, high=2)
A03_svd_applied_to_slightly_bigger_word_document_matrix.ipynb
makeyourowntextminingtoolkit/makeyourowntextminingtoolkit
gpl-2.0
Operations on Tensors

Variables and Constants

Tensors in TensorFlow are either constant (tf.constant) or variables (tf.Variable). Constant values can not be changed, while variable values can be. The main difference is that instances of tf.Variable have methods allowing us to change their values while tensors constructed with tf.constant don't have these methods, and therefore their values can not be changed. When you want to change the value of a tf.Variable x use one of the following methods:

x.assign(new_value)
x.assign_add(value_to_be_added)
x.assign_sub(value_to_be_subtracted)
x = tf.constant([2, 3, 4]) x x = tf.Variable(2.0, dtype=tf.float32, name='my_variable') x.assign(45.8) # TODO 1 x x.assign_add(4) # TODO 2 x x.assign_sub(3) # TODO 3 x
courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/1_core_tensorflow.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Point-wise operations

Tensorflow offers similar point-wise tensor operations as numpy does:

tf.add allows us to add the components of a tensor
tf.multiply allows us to multiply the components of a tensor
tf.subtract allows us to subtract the components of a tensor
tf.math.* contains the usual math operations to be applied on the components of a tensor

and many more...

Most of the standard arithmetic operations (tf.add, tf.subtract, etc.) are overloaded by the usual corresponding arithmetic symbols (+, -, etc.)
a = tf.constant([5, 3, 8]) # TODO 1 b = tf.constant([3, -1, 2]) c = tf.add(a, b) d = a + b print("c:", c) print("d:", d) a = tf.constant([5, 3, 8]) # TODO 2 b = tf.constant([3, -1, 2]) c = tf.multiply(a, b) d = a * b print("c:", c) print("d:", d) # tf.math.exp expects floats so we need to explicitly give the type a = tf.constant([5, 3, 8], dtype=tf.float32) b = tf.math.exp(a) print("b:", b)
courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/1_core_tensorflow.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
NumPy Interoperability In addition to native TF tensors, tensorflow operations can take native python types and NumPy arrays as operands.
# native python list a_py = [1, 2] b_py = [3, 4] tf.add(a_py, b_py) # TODO 1 # numpy arrays a_np = np.array([1, 2]) b_np = np.array([3, 4]) tf.add(a_np, b_np) # TODO 2 # native TF tensor a_tf = tf.constant([1, 2]) b_tf = tf.constant([3, 4]) tf.add(a_tf, b_tf) # TODO 3
courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/1_core_tensorflow.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Gradient Function To use gradient descent we need to take the partial derivatives of the loss function with respect to each of the weights. We could manually compute the derivatives, but with Tensorflow's automatic differentiation capabilities we don't have to! During gradient descent we think of the loss as a function of the parameters $w_0$ and $w_1$. Thus, we want to compute the partial derivatives with respect to these variables. For that we need to wrap our loss computation within the context of a tf.GradientTape instance which will record gradient information: python with tf.GradientTape() as tape: loss = # computation This will allow us to later compute the gradients of any tensor computed within the tf.GradientTape context with respect to instances of tf.Variable: python gradients = tape.gradient(loss, [w0, w1]) We illustrate this procedure by computing the loss gradients with respect to the model weights:
# TODO 1 def compute_gradients(X, Y, w0, w1): with tf.GradientTape() as tape: loss = loss_mse(X, Y, w0, w1) return tape.gradient(loss, [w0, w1]) w0 = tf.Variable(0.0) w1 = tf.Variable(0.0) dw0, dw1 = compute_gradients(X, Y, w0, w1) print("dw0:", dw0.numpy()) print("dw1", dw1.numpy())
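As a sanity check on the gradients returned by tape.gradient, the partial derivatives of an MSE loss can be verified with plain NumPy via finite differences. This sketch assumes a linear model yhat = w0*x + w1 and a mean-squared-error loss; the notebook's actual loss_mse, X, and Y are defined in earlier cells, so the data here is hypothetical:

```python
import numpy as np

# hypothetical toy data standing in for the notebook's X and Y
X = np.array([1.0, 2.0, 3.0, 4.0])
Y = np.array([2.0, 4.1, 5.9, 8.2])
w0, w1 = 0.1, -0.3

def loss_mse(X, Y, w0, w1):
    # assumed model: yhat = w0 * X + w1
    return np.mean((w0 * X + w1 - Y) ** 2)

# analytic partial derivatives of the MSE
err = w0 * X + w1 - Y
dw0 = 2.0 * np.mean(err * X)
dw1 = 2.0 * np.mean(err)

# central finite differences should agree closely
eps = 1e-6
dw0_fd = (loss_mse(X, Y, w0 + eps, w1) - loss_mse(X, Y, w0 - eps, w1)) / (2 * eps)
dw1_fd = (loss_mse(X, Y, w0, w1 + eps) - loss_mse(X, Y, w0, w1 - eps)) / (2 * eps)
```

Because the loss is quadratic in the weights, the central difference matches the analytic derivative up to floating point error; this is the same check automatic differentiation spares us from doing by hand.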
courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/1_core_tensorflow.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Quick numbers: # RRT events & total # encounters (for the main hospital)

For all patient & location types
query_TotalEncs = """ SELECT count(1) FROM ( SELECT DISTINCT encntr_id FROM encounter WHERE encntr_complete_dt_tm < 4000000000000 AND loc_facility_cd = '633867' ) t; """ cur.execute(query_TotalEncs) cur.fetchall()
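The 4000000000000 cutoff above compares epoch-millisecond timestamps; 4×10^12 ms lands in the year 2096, so the predicate effectively excludes far-future sentinel values used for open-ended records. A quick check, assuming the column holds epoch milliseconds:

```python
from datetime import datetime

cutoff_ms = 4000000000000
cutoff = datetime.utcfromtimestamp(cutoff_ms / 1000.0)
print(cutoff)   # a date late in the 21st century
```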
Data Science Notebooks/Notebooks/EDA/encounter_durations[EDA].ipynb
nikitaswinnen/model-for-predicting-rapid-response-team-events
apache-2.0
For admit_type_cd!='0' & encntr_type_class_cd='391'
query_TotalEncs = """ SELECT count(1) FROM ( SELECT DISTINCT encntr_id FROM encounter WHERE encntr_complete_dt_tm < 4e12 AND loc_facility_cd = '633867' AND admit_type_cd!='0' AND encntr_type_class_cd='391' ) t; """ cur.execute(query_TotalEncs) cur.fetchall()
Data Science Notebooks/Notebooks/EDA/encounter_durations[EDA].ipynb
nikitaswinnen/model-for-predicting-rapid-response-team-events
apache-2.0
Examining distribution of encounter durations (with loc_facility_cd)

Analyze the durations of the RRT event patients.
query_count = """ SELECT count(*) FROM ( SELECT DISTINCT ce.encntr_id FROM clinical_event ce INNER JOIN encounter enc ON enc.encntr_id = ce.encntr_id WHERE ce.event_cd = '54411998' AND ce.result_status_cd NOT IN ('31', '36') AND ce.valid_until_dt_tm > 4e12 AND ce.event_class_cd not in ('654645') AND enc.loc_facility_cd = '633867' AND enc.encntr_complete_dt_tm < 4e12 AND enc.admit_type_cd!='0' AND enc.encntr_type_class_cd='391' ) AS A ; """ cur.execute(query_count) cur.fetchall() query_count = """ SELECT count(*) FROM ( SELECT DISTINCT encntr_id FROM encounter enc WHERE enc.loc_facility_cd = '633867' AND enc.encntr_complete_dt_tm < 4e12 AND enc.admit_type_cd!='0' AND enc.encntr_type_class_cd='391' AND encntr_id NOT IN ( SELECT DISTINCT ce.encntr_id FROM clinical_event ce INNER JOIN encounter enc ON enc.encntr_id = ce.encntr_id WHERE ce.event_cd = '54411998' AND ce.result_status_cd NOT IN ('31', '36') AND ce.valid_until_dt_tm > 4e12 AND ce.event_class_cd not in ('654645') ) ) AS A; """ cur.execute(query_count) cur.fetchall()
Data Science Notebooks/Notebooks/EDA/encounter_durations[EDA].ipynb
nikitaswinnen/model-for-predicting-rapid-response-team-events
apache-2.0
Let's look at durations for inpatients WITH RRTs from the Main Hospital where encounter_admit_type is not zero
query = """ SELECT DISTINCT ce.encntr_id , COALESCE(tci.checkin_dt_tm, enc.arrive_dt_tm) AS checkin_dt_tm , enc.depart_dt_tm as depart_dt_tm , (enc.depart_dt_tm - COALESCE(tci.checkin_dt_tm, enc.arrive_dt_tm))/3600000 AS diff_hours , enc.reason_for_visit , enc.admit_src_cd , enc.admit_type_cd FROM clinical_event ce INNER JOIN encounter enc ON enc.encntr_id = ce.encntr_id LEFT OUTER JOIN ( SELECT ti.encntr_id AS encntr_id , MIN(tc.checkin_dt_tm) AS checkin_dt_tm FROM tracking_item ti JOIN tracking_checkin tc ON ti.tracking_id = tc.tracking_id GROUP BY ti.encntr_id ) tci ON tci.encntr_id = enc.encntr_id WHERE enc.loc_facility_cd = '633867' AND enc.encntr_complete_dt_tm < 4e12 AND enc.admit_type_cd!='0' AND enc.encntr_type_class_cd='391' AND enc.encntr_id IN ( SELECT DISTINCT ce.encntr_id FROM clinical_event ce WHERE ce.event_cd = '54411998' AND ce.result_status_cd NOT IN ('31', '36') AND ce.valid_until_dt_tm > 4e12 AND ce.event_class_cd not in ('654645') ) ;""" cur.execute(query) df_rrt = as_pandas(cur) df_rrt.head() df_rrt.describe().T # the mean stay is 292 hours (12.1 days). # The median stay is 184 hours (7.67 days) # The minimum stay is 8 hours. The longest stay is 3550 hours (~148 days) plt.figure() df_rrt.diff_hours.hist(bins = 300) plt.xlim(0, 600) # Records with short durations: df_rrt[df_rrt.diff_hours < 12]
Data Science Notebooks/Notebooks/EDA/encounter_durations[EDA].ipynb
nikitaswinnen/model-for-predicting-rapid-response-team-events
apache-2.0
Let's look at durations for inpatients WITHOUT RRTs from the Main Hospital where encounter_admit_type is not zero
query = """ SELECT DISTINCT ce.encntr_id , COALESCE(tci.checkin_dt_tm , enc.arrive_dt_tm) AS checkin_dt_tm , enc.depart_dt_tm as depart_dt_tm , (enc.depart_dt_tm - COALESCE(tci.checkin_dt_tm, enc.arrive_dt_tm))/3600000 AS diff_hours , enc.reason_for_visit , enc.admit_src_cd , enc.admit_type_cd FROM clinical_event ce INNER JOIN encounter enc ON enc.encntr_id = ce.encntr_id LEFT OUTER JOIN ( SELECT ti.encntr_id AS encntr_id , MIN(tc.checkin_dt_tm) AS checkin_dt_tm FROM tracking_item ti JOIN tracking_checkin tc ON ti.tracking_id = tc.tracking_id GROUP BY ti.encntr_id ) tci ON tci.encntr_id = enc.encntr_id WHERE enc.loc_facility_cd = '633867' AND enc.encntr_complete_dt_tm < 4e12 AND enc.admit_type_cd!='0' AND enc.encntr_type_class_cd='391' AND enc.encntr_id NOT IN ( SELECT DISTINCT ce.encntr_id FROM clinical_event ce WHERE ce.event_cd = '54411998' AND ce.result_status_cd NOT IN ('31', '36') AND ce.valid_until_dt_tm > 4e12 AND ce.event_class_cd not in ('654645') ) ;""" cur.execute(query) df_nonrrt = as_pandas(cur) df_nonrrt.describe().T # NonRRT: The mean stay is 122 hours (5 days) // RRT: The mean stay is 292 hours (12.1 days). # NonRRT: The median stay is 77 hours (3.21 days)// RRT: The median stay is 184 hours (7.67 days) # NonRRT: The minimum stay is 0.08 hours // RRT: The minimum stay is ~8 hours. plt.figure() df_nonrrt.diff_hours.hist(bins = 500) plt.xlim(0, 600)
Data Science Notebooks/Notebooks/EDA/encounter_durations[EDA].ipynb
nikitaswinnen/model-for-predicting-rapid-response-team-events
apache-2.0
Plot both together to see how encounter duration distributions are different
plt.figure(figsize = (10,8)) df_rrt.diff_hours.plot.hist(alpha=0.4, bins=400,normed=True) df_nonrrt.diff_hours.plot.hist(alpha=0.4, bins=800,normed=True) plt.xlabel('Hospital Stay Durations, hours', fontsize=14) plt.ylabel('Normalized Frequency', fontsize=14) plt.legend(['RRT', 'Non RRT']) plt.tick_params(labelsize=14) plt.xlim(0, 1000)
Data Science Notebooks/Notebooks/EDA/encounter_durations[EDA].ipynb
nikitaswinnen/model-for-predicting-rapid-response-team-events
apache-2.0
Even accounting for the hospital, inpatient status, and some admit_type_cd values, the durations are still quite different between RRT & non-RRT.

Trying some subset visualizations -- these show no difference
print df_nonrrt.admit_type_cd.value_counts() print print df_rrt.admit_type_cd.value_counts() print df_nonrrt.admit_src_cd.value_counts() print print df_rrt.admit_src_cd.value_counts() plt.figure(figsize = (10,8)) df_rrt[df_rrt.admit_type_cd=='309203'].diff_hours.plot.hist(alpha=0.4, bins=300,normed=True) df_nonrrt[df_nonrrt.admit_type_cd=='309203'].diff_hours.plot.hist(alpha=0.4, bins=600,normed=True) # plt.xlabel('Hospital Stay Durations, hours', fontsize=14) # plt.ylabel('Normalized Frequency', fontsize=14) plt.legend(['RRT', 'Non RRT']) plt.tick_params(labelsize=14) plt.xlim(0, 1000) plt.figure(figsize = (10,8)) df_rrt[df_rrt.admit_src_cd=='309196'].diff_hours.plot.hist(alpha=0.4, bins=300,normed=True) df_nonrrt[df_nonrrt.admit_src_cd=='309196'].diff_hours.plot.hist(alpha=0.4, bins=600,normed=True) # plt.xlabel('Hospital Stay Durations, days', fontsize=14) # plt.ylabel('Normalized Frequency', fontsize=14) plt.legend(['RRT', 'Non RRT']) plt.tick_params(labelsize=14) plt.xlim(0, 1000)
Data Science Notebooks/Notebooks/EDA/encounter_durations[EDA].ipynb
nikitaswinnen/model-for-predicting-rapid-response-team-events
apache-2.0
Despite controlling for patient parameters, patients with RRT events stay in the hospital longer than patients without RRT events.

Rerun previous EDA on hospital & patient types

Let's take a step back and look at the encounter table, for all hospitals and patient types [but using corrected time duration].
# For encounters with RRT events query = """ SELECT DISTINCT ce.encntr_id , COALESCE(tci.checkin_dt_tm , enc.arrive_dt_tm) AS checkin_dt_tm , enc.depart_dt_tm as depart_dt_tm , (enc.depart_dt_tm - COALESCE(tci.checkin_dt_tm, enc.arrive_dt_tm))/3600000 AS diff_hours , enc.reason_for_visit , enc.admit_type_cd, cv_admit_type.description as admit_type_desc , enc.encntr_type_cd , cv_enc_type.description as enc_type_desc , enc.encntr_type_class_cd , cv_enc_type_class.description as enc_type_class_desc , enc.admit_src_cd , cv_admit_src.description as admit_src_desc , enc.loc_facility_cd , cv_loc_fac.description as loc_desc FROM clinical_event ce INNER JOIN encounter enc ON enc.encntr_id = ce.encntr_id LEFT OUTER JOIN code_value cv_admit_type ON enc.admit_type_cd = cv_admit_type.code_value LEFT OUTER JOIN code_value cv_enc_type ON enc.encntr_type_cd = cv_enc_type.code_value LEFT OUTER JOIN code_value cv_enc_type_class ON enc.encntr_type_class_cd = cv_enc_type_class.code_value LEFT OUTER JOIN code_value cv_admit_src ON enc.admit_src_cd = cv_admit_src.code_value LEFT OUTER JOIN code_value cv_loc_fac ON enc.loc_facility_cd = cv_loc_fac.code_value LEFT OUTER JOIN ( SELECT ti.encntr_id AS encntr_id , MIN(tc.checkin_dt_tm) AS checkin_dt_tm FROM tracking_item ti JOIN tracking_checkin tc ON ti.tracking_id = tc.tracking_id GROUP BY ti.encntr_id ) tci ON tci.encntr_id = enc.encntr_id WHERE enc.encntr_id IN ( SELECT DISTINCT ce.encntr_id FROM clinical_event ce WHERE ce.event_cd = '54411998' AND ce.result_status_cd NOT IN ('31', '36') AND ce.valid_until_dt_tm > 4e12 AND ce.event_class_cd not in ('654645') ) ;""" cur.execute(query) df = as_pandas(cur) df.describe().T # check nulls print df[pd.isnull(df.diff_hours)].count() print print df[~pd.isnull(df.diff_hours)].count() df[pd.isnull(df.diff_hours)] # can't work with the nans in there... 
# delete these rows print df.shape df = df[~pd.isnull(df['depart_dt_tm'])] df = df.reset_index(drop=True) print df.shape df.describe().T # RRT encounters for all patients/hospitals # All RRT: mean stay: 293.5 hours // NonRRT: The mean stay is 122 hours (5 days) // RRT: The mean stay is 292 hours (12.1 days). # All RRT: median stay: 190 hours // NonRRT: The median stay is 77 hours (3.21 days) // RRT: The median stay is 184 hours (7.67 days) # All RRT: min stay: 0 hours // NonRRT: The minimum stay is 0.08 hours // RRT: The minimum stay is ~8 hours. # Let's be suspicious of short encounters, say, under 6 hours. # There are two cases where the number of hours = 0; these both have admit_type_cd=0, loc_facility_cd=4382287, & encntr_type_class_cd=393 df[df.diff_hours < 6]
Data Science Notebooks/Notebooks/EDA/encounter_durations[EDA].ipynb
nikitaswinnen/model-for-predicting-rapid-response-team-events
apache-2.0
The notebook Probe_encounter_types_classes explores admit type, class types & counts
plt.figure() df['diff_hours'].plot.hist(bins=500) plt.xlabel("Hospital Stay Duration, hours") plt.title("Range of stays, patients with RRT") plt.xlim(0, 2000)
Data Science Notebooks/Notebooks/EDA/encounter_durations[EDA].ipynb
nikitaswinnen/model-for-predicting-rapid-response-team-events
apache-2.0
Group by facility

We want to pull from similar patient populations
df.head() df.loc_desc.value_counts() grouped = df.groupby('loc_desc') grouped.describe()
Data Science Notebooks/Notebooks/EDA/encounter_durations[EDA].ipynb
nikitaswinnen/model-for-predicting-rapid-response-team-events
apache-2.0
Most results come from 633867, or The Main Hospital
df.diff_hours.hist(by=df.loc_desc, bins=300) # Use locations 4382264, 4382273, 633867 plt.figure(figsize=(12, 6)) df[df['loc_facility_cd']=='633867']['diff_hours'].plot.hist(alpha=0.4, bins=300,normed=True) df[df['loc_facility_cd']=='4382264']['diff_hours'].plot.hist(alpha=0.4, bins=300,normed=True) df[df['loc_facility_cd']=='4382273']['diff_hours'].plot.hist(alpha=0.4, bins=300,normed=True) plt.xlabel('Hospital Stay Durations, hours', fontsize=14) plt.ylabel('Normalized Frequency', fontsize=14) # plt.legend(['633867', '4382264', '4382273']) plt.legend(["Main Hospital", "Satellite Hospital 1", "Satellite Hospital 2"]) plt.tick_params(labelsize=14) plt.xlim(0, 1000)
Data Science Notebooks/Notebooks/EDA/encounter_durations[EDA].ipynb
nikitaswinnen/model-for-predicting-rapid-response-team-events
apache-2.0
Looks like these three locations (633867, 4382264, 4382273) have about the same distribution. Appropriate test to verify this: 2-sample Kolmogorov-Smirnov, if you're willing to compare pairwise...other tests? Wikipedia has a good article with references: https://en.wikipedia.org/wiki/Kolmogorov–Smirnov_test. Null hypothesis: the samples come from the same distribution. The null hypothesis is rejected if the test statistic is greater than the critical value (see wiki article)
from scipy.stats import ks_2samp ks_2samp(df[df['loc_facility_cd']=='633867']['diff_hours'],df[df['loc_facility_cd']=='4382264']['diff_hours']) # Critical test statistic at alpha = 0.05: 1.36 * sqrt((n1+n2)/(n1*n2)) = 1.36*sqrt((1775+582)/(1775*582)) = 0.065 # 0.074 > 0.065 -> null hypothesis rejected at level 0.05. --> histograms are different ks_2samp(df[df['loc_facility_cd']=='4382264']['diff_hours'], df[df['loc_facility_cd']=='4382273']['diff_hours']) # Critical test statistic at alpha = 0.05: 1.36 * sqrt((n1+n2)/(n1*n2)) = 1.36*sqrt((997+582)/(997*582)) = 0.071 # 0.05 !> 0.071 -> fail to reject null hypothesis at level 0.05. --> histograms are similar ks_2samp(df[df['loc_facility_cd']=='633867']['diff_hours'],df[df['loc_facility_cd']=='4382273']['diff_hours']) # Critical test statistic at alpha = 0.05: 1.36 * sqrt((n1+n2)/(n1*n2)) = 1.36*sqrt((1775+997)/(1775*997)) = 0.054 # 0.094 > 0.054 -> null hypothesis rejected at level 0.05. --> histograms are different; p-value indicates they're very different
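The critical values worked out by hand in the comments above follow the large-sample two-sample K-S formula D_crit = c(alpha) * sqrt((n1+n2)/(n1*n2)), with c(alpha) = sqrt(-ln(alpha/2)/2), which is approximately 1.36 at alpha = 0.05. A small helper reproducing those numbers:

```python
import math

def ks_critical_value(n1, n2, alpha=0.05):
    # large-sample two-sample Kolmogorov-Smirnov critical value for D
    c_alpha = math.sqrt(-math.log(alpha / 2.0) / 2.0)   # ~1.358 for alpha=0.05
    return c_alpha * math.sqrt((n1 + n2) / float(n1 * n2))

# sample sizes from the comparisons above
print(ks_critical_value(1775, 582))   # ~0.065
print(ks_critical_value(997, 582))    # ~0.071
print(ks_critical_value(1775, 997))   # ~0.054
```

Reject the null hypothesis (same distribution) whenever the observed K-S statistic exceeds the returned critical value.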
Data Science Notebooks/Notebooks/EDA/encounter_durations[EDA].ipynb
nikitaswinnen/model-for-predicting-rapid-response-team-events
apache-2.0
From scipy documentation: "If the KS statistic is small or the p-value is high, then we cannot reject the hypothesis that the distributions of the two samples are the same" Null hypothesis: the distributions are the same. Looks like samples from 4382273 are different... plot that & 633867
plt.figure(figsize=(10,8)) df[df['loc_facility_cd']=='633867']['diff_hours'].plot.hist(alpha=0.4, bins=500,normed=True) df[df['loc_facility_cd']=='4382273']['diff_hours'].plot.hist(alpha=0.4, bins=700,normed=True) plt.xlabel('Hospital Stay Durations, hours') plt.legend(['633867', '4382273']) plt.xlim(0, 1000)
Data Science Notebooks/Notebooks/EDA/encounter_durations[EDA].ipynb
nikitaswinnen/model-for-predicting-rapid-response-team-events
apache-2.0
Let's compare encounter duration histograms for patients with RRT & without RRT events, and see if there is a right subset of data to be selected for modeling (There is)
df.columns df.admit_src_desc.value_counts() df.enc_type_class_desc.value_counts() # vast majority are inpatient df.enc_type_desc.value_counts() df.admit_type_desc.value_counts()
Data Science Notebooks/Notebooks/EDA/encounter_durations[EDA].ipynb
nikitaswinnen/model-for-predicting-rapid-response-team-events
apache-2.0
Plot RRT & non-RRT with different codes
# For encounters without RRT events, from Main Hospital. # takes a while to run -- several minutes query = """ SELECT DISTINCT ce.encntr_id , COALESCE(tci.checkin_dt_tm , enc.arrive_dt_tm) AS checkin_dt_tm , enc.depart_dt_tm as depart_dt_tm , (enc.depart_dt_tm - COALESCE(tci.checkin_dt_tm, enc.arrive_dt_tm))/3600000 AS diff_hours , enc.reason_for_visit , enc.admit_type_cd , cv_admit_type.description as admit_type_desc , enc.encntr_type_cd , cv_enc_type.description as enc_type_desc , enc.encntr_type_class_cd , cv_enc_type_class.description as enc_type_class_desc , enc.admit_src_cd , cv_admit_src.description as admit_src_desc , enc.loc_facility_cd , cv_loc_fac.description as loc_desc FROM clinical_event ce INNER JOIN encounter enc ON enc.encntr_id = ce.encntr_id LEFT OUTER JOIN code_value cv_admit_type ON enc.admit_type_cd = cv_admit_type.code_value LEFT OUTER JOIN code_value cv_enc_type ON enc.encntr_type_cd = cv_enc_type.code_value LEFT OUTER JOIN code_value cv_enc_type_class ON enc.encntr_type_class_cd = cv_enc_type_class.code_value LEFT OUTER JOIN code_value cv_admit_src ON enc.admit_src_cd = cv_admit_src.code_value LEFT OUTER JOIN code_value cv_loc_fac ON enc.loc_facility_cd = cv_loc_fac.code_value LEFT OUTER JOIN ( SELECT ti.encntr_id AS encntr_id , MIN(tc.checkin_dt_tm) AS checkin_dt_tm FROM tracking_item ti JOIN tracking_checkin tc ON ti.tracking_id = tc.tracking_id GROUP BY ti.encntr_id ) tci ON tci.encntr_id = enc.encntr_id WHERE enc.loc_facility_cd='633867' AND enc.encntr_id NOT IN ( SELECT DISTINCT ce.encntr_id FROM clinical_event ce WHERE ce.event_cd = '54411998' AND ce.result_status_cd NOT IN ('31', '36') AND ce.valid_until_dt_tm > 4e12 AND ce.event_class_cd not in ('654645') ) ;""" cur.execute(query) df_nrrt = as_pandas(cur) df_nrrt.describe() df_nrrt[~pd.isnull(df_nrrt['depart_dt_tm'])].count() # can't work with the nans in there... 
# delete these rows print df_nrrt.shape df_nrrt = df_nrrt[~pd.isnull(df_nrrt['depart_dt_tm'])] df_nrrt = df_nrrt.reset_index(drop=True) print df_nrrt.shape plt.figure(figsize=(10,8)) df[df['loc_facility_cd']=='633867']['diff_hours'].plot.hist(alpha=0.5, bins=500,normed=True) df_nrrt['diff_hours'].plot.hist(alpha=0.5, bins=900,normed=True) plt.xlabel('Stay Durations at Main Hospital [hours]') plt.legend(['RRT patients', 'Non-RRT patients']) plt.title('For all non-RRT patients') plt.xlim(0, 800) plt.figure(figsize=(10,8)) df[df['loc_facility_cd']=='633867']['diff_hours'][df.admit_type_cd != '0'].plot.hist(alpha=0.5, bins=500,normed=True) df_nrrt['diff_hours'][df_nrrt.admit_type_cd != '0'].plot.hist(alpha=0.5, bins=900,normed=True) plt.xlabel('Stay Durations at Main Hospital [hours]') plt.legend(['RRT patients', 'Non-RRT patients']) plt.title('For patients with admit_type_cd !=0') plt.xlim(0, 800) plt.figure(figsize=(10,8)) df[df['loc_facility_cd']=='633867']['diff_hours'][df.encntr_type_class_cd=='391'].plot.hist(alpha=0.5, bins=500,normed=True) df_nrrt['diff_hours'][df_nrrt.encntr_type_class_cd=='391'].plot.hist(alpha=0.5, bins=900,normed=True) plt.xlabel('Stay Durations at Main Hospital [hours]') plt.legend(['RRT patients', 'Non-RRT patients']) plt.title('For patients with encntr_type_class_cd=="391"') plt.xlim(0, 800) plt.figure(figsize=(10,8)) df[df['loc_facility_cd']=='633867']['diff_hours'][(df.encntr_type_class_cd=='391') & (df.admit_type_cd != '0')].plot.hist(alpha=0.5, bins=500,normed=True) df_nrrt['diff_hours'][(df_nrrt.encntr_type_class_cd=='391') & (df_nrrt.admit_type_cd != '0')].plot.hist(alpha=0.5, bins=1000,normed=True) plt.xlabel('Stay Durations at Main Hospital [hours]') plt.legend(['RRT patients', 'Non-RRT patients']) plt.title('For patients with encntr_type_class_cd=="391" & df.admit_type_cd != "0" ') plt.xlim(0, 800) df_nrrt.describe() # There are values of diff_hours that are negative. 
df_nrrt[df_nrrt.diff_hours<0].count() # But, there are no such values after we correct for encounter type class & admit type df_nrrt[(df_nrrt.encntr_type_class_cd=='391') & (df_nrrt.admit_type_cd != '0')][df_nrrt.diff_hours<0].count()
Data Science Notebooks/Notebooks/EDA/encounter_durations[EDA].ipynb
nikitaswinnen/model-for-predicting-rapid-response-team-events
apache-2.0
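The null-filtering and stay-duration logic above can be sketched on toy data; the column names `depart_dt_tm` and `diff_hours` mirror the notebook, but the rows below are invented for illustration:

```python
import pandas as pd

# Toy frame mimicking the notebook's columns: one row has no departure time.
df = pd.DataFrame({
    "depart_dt_tm": pd.to_datetime(["2016-01-01 10:00", None, "2016-01-02 08:00"]),
    "arrive_dt_tm": pd.to_datetime(["2015-12-30 10:00", "2015-12-31 09:00", "2016-01-01 08:00"]),
})

# Keep only rows with a known departure time, then compute stay duration in hours.
df = df[~pd.isnull(df["depart_dt_tm"])].reset_index(drop=True)
df["diff_hours"] = (df["depart_dt_tm"] - df["arrive_dt_tm"]).dt.total_seconds() / 3600

print(len(df))                    # 2 rows survive the filter
print(df["diff_hours"].tolist())  # [48.0, 24.0]
```

Rows with `NaT` departure times would otherwise produce `NaN` durations, which is why the notebook drops them before plotting.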
Softmax Classifier Sanity Check: Overfit Small Portion
script = """ source("breastcancer/softmax_clf.dml") as clf # Hyperparameters & Settings lr = 1e-2 # learning rate mu = 0.9 # momentum decay = 0.999 # learning rate decay constant batch_size = 32 epochs = 500 log_interval = 1 n = 200 # sample size for overfitting sanity check # Train [W, b] = clf::train(X[1:n,], Y[1:n,], X[1:n,], Y[1:n,], lr, mu, decay, batch_size, epochs, log_interval) """ outputs = ("W", "b") script = dml(script).input(X=X, Y=Y, X_val=X_val, Y_val=Y_val).output(*outputs) W, b = ml.execute(script).get(*outputs) W, b
projects/breast_cancer/MachineLearning.ipynb
dusenberrymw/incubator-systemml
apache-2.0
Train
script = """ source("breastcancer/softmax_clf.dml") as clf # Hyperparameters & Settings lr = 5e-7 # learning rate mu = 0.5 # momentum decay = 0.999 # learning rate decay constant batch_size = 32 epochs = 1 log_interval = 10 # Train [W, b] = clf::train(X, Y, X_val, Y_val, lr, mu, decay, batch_size, epochs, log_interval) """ outputs = ("W", "b") script = dml(script).input(X=X, Y=Y, X_val=X_val, Y_val=Y_val).output(*outputs) W, b = ml.execute(script).get(*outputs) W, b
projects/breast_cancer/MachineLearning.ipynb
dusenberrymw/incubator-systemml
apache-2.0
Eval
script = """ source("breastcancer/softmax_clf.dml") as clf # Eval probs = clf::predict(X, W, b) [loss, accuracy] = clf::eval(probs, Y) probs_val = clf::predict(X_val, W, b) [loss_val, accuracy_val] = clf::eval(probs_val, Y_val) """ outputs = ("loss", "accuracy", "loss_val", "accuracy_val") script = dml(script).input(X=X, Y=Y, X_val=X_val, Y_val=Y_val, W=W, b=b).output(*outputs) loss, acc, loss_val, acc_val = ml.execute(script).get(*outputs) loss, acc, loss_val, acc_val
projects/breast_cancer/MachineLearning.ipynb
dusenberrymw/incubator-systemml
apache-2.0
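The `eval` step above computes cross-entropy loss and accuracy from softmax probabilities. A NumPy re-derivation of that computation (an illustrative sketch, not the DML implementation itself):

```python
import numpy as np

def softmax(scores):
    # Subtract the row max for numerical stability before exponentiating.
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def eval_probs(probs, Y):
    # Y is one-hot; cross-entropy averages -log p(true class),
    # accuracy compares argmax of prediction and target.
    loss = -np.mean(np.log(probs[np.arange(len(Y)), Y.argmax(axis=1)]))
    accuracy = np.mean(probs.argmax(axis=1) == Y.argmax(axis=1))
    return loss, accuracy

scores = np.array([[2.0, 0.5], [0.1, 1.9]])
Y = np.array([[1, 0], [0, 1]])
loss, acc = eval_probs(softmax(scores), Y)
print(acc)  # 1.0: both rows are classified correctly
```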
LeNet-like ConvNet Sanity Check: Overfit Small Portion
script = """ source("breastcancer/convnet.dml") as clf # Hyperparameters & Settings lr = 1e-2 # learning rate mu = 0.9 # momentum decay = 0.999 # learning rate decay constant lambda = 0 #5e-04 batch_size = 32 epochs = 300 log_interval = 1 dir = "models/lenet-cnn/sanity/" n = 200 # sample size for overfitting sanity check # Train [Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2] = clf::train(X[1:n,], Y[1:n,], X[1:n,], Y[1:n,], C, Hin, Win, lr, mu, decay, lambda, batch_size, epochs, log_interval, dir) """ outputs = ("Wc1", "bc1", "Wc2", "bc2", "Wc3", "bc3", "Wa1", "ba1", "Wa2", "ba2") script = (dml(script).input(X=X, X_val=X_val, Y=Y, Y_val=Y_val, C=c, Hin=size, Win=size) .output(*outputs)) Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2 = ml.execute(script).get(*outputs) Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2
projects/breast_cancer/MachineLearning.ipynb
dusenberrymw/incubator-systemml
apache-2.0
Hyperparameter Search
script = """ source("breastcancer/convnet.dml") as clf dir = "models/lenet-cnn/hyperparam-search/" # TODO: Fix `parfor` so that it can be efficiently used for hyperparameter tuning j = 1 while(j < 2) { #parfor(j in 1:10000, par=6) { # Hyperparameter Sampling & Settings lr = 10 ^ as.scalar(rand(rows=1, cols=1, min=-7, max=-1)) # learning rate mu = as.scalar(rand(rows=1, cols=1, min=0.5, max=0.9)) # momentum decay = as.scalar(rand(rows=1, cols=1, min=0.9, max=1)) # learning rate decay constant lambda = 10 ^ as.scalar(rand(rows=1, cols=1, min=-7, max=-1)) # regularization constant batch_size = 32 epochs = 1 log_interval = 10 trial_dir = dir + "j/" # Train [Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2] = clf::train(X, Y, X_val, Y_val, C, Hin, Win, lr, mu, decay, lambda, batch_size, epochs, log_interval, trial_dir) # Eval #probs = clf::predict(X, C, Hin, Win, Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2) #[loss, accuracy] = clf::eval(probs, Y) probs_val = clf::predict(X_val, C, Hin, Win, Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2) [loss_val, accuracy_val] = clf::eval(probs_val, Y_val) # Save hyperparams str = "lr: " + lr + ", mu: " + mu + ", decay: " + decay + ", lambda: " + lambda + ", batch_size: " + batch_size name = dir + accuracy_val + "," + j #+","+accuracy+","+j write(str, name) j = j + 1 } """ script = (dml(script).input(X=X, X_val=X_val, Y=Y, Y_val=Y_val, C=c, Hin=size, Win=size)) ml.execute(script)
projects/breast_cancer/MachineLearning.ipynb
dusenberrymw/incubator-systemml
apache-2.0
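The hyperparameter sampling in the DML loop above — `10 ^ rand(min=-7, max=-1)` for `lr` and `lambda` — is a log-uniform draw. A Python analogue with the same ranges:

```python
import random

rng = random.Random(0)

def sample_hyperparams():
    # 10 ** uniform(-7, -1) gives a log-uniform draw over [1e-7, 1e-1],
    # spreading trials evenly across orders of magnitude.
    return {
        "lr": 10 ** rng.uniform(-7, -1),
        "mu": rng.uniform(0.5, 0.9),
        "decay": rng.uniform(0.9, 1.0),
        "lambda": 10 ** rng.uniform(-7, -1),
    }

trials = [sample_hyperparams() for _ in range(100)]
assert all(1e-7 <= t["lr"] <= 1e-1 for t in trials)
```

Sampling in log space matters because a learning rate of 1e-2 vs 1e-3 is a far bigger change than 0.052 vs 0.051.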
Train
ml.setStatistics(True) ml.setExplain(True) # sc.setLogLevel("OFF") script = """ source("breastcancer/convnet_distrib_sgd.dml") as clf # Hyperparameters & Settings lr = 0.00205 # learning rate mu = 0.632 # momentum decay = 0.99 # learning rate decay constant lambda = 0.00385 batch_size = 1 parallel_batches = 19 epochs = 1 log_interval = 1 dir = "models/lenet-cnn/train/" n = 50 #1216 # limit on number of samples (for debugging) X = X[1:n,] Y = Y[1:n,] X_val = X_val[1:n,] Y_val = Y_val[1:n,] # Train [Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2] = clf::train(X, Y, X_val, Y_val, C, Hin, Win, lr, mu, decay, lambda, batch_size, parallel_batches, epochs, log_interval, dir) """ outputs = ("Wc1", "bc1", "Wc2", "bc2", "Wc3", "bc3", "Wa1", "ba1", "Wa2", "ba2") script = (dml(script).input(X=X, X_val=X_val, Y=Y, Y_val=Y_val, C=c, Hin=size, Win=size) .output(*outputs)) outs = ml.execute(script).get(*outputs) Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2 = outs Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2 script = """ source("breastcancer/convnet_distrib_sgd.dml") as clf # Hyperparameters & Settings lr = 0.00205 # learning rate mu = 0.632 # momentum decay = 0.99 # learning rate decay constant lambda = 0.00385 batch_size = 1 parallel_batches = 19 epochs = 1 log_interval = 1 dir = "models/lenet-cnn/train/" # Dummy data [X, Y, C, Hin, Win] = clf::generate_dummy_data(50) #1216) [X_val, Y_val, C, Hin, Win] = clf::generate_dummy_data(100) # Train [Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2] = clf::train(X, Y, X_val, Y_val, C, Hin, Win, lr, mu, decay, lambda, batch_size, parallel_batches, epochs, log_interval, dir) """ outputs = ("Wc1", "bc1", "Wc2", "bc2", "Wc3", "bc3", "Wa1", "ba1", "Wa2", "ba2") script = dml(script).output(*outputs) outs = ml.execute(script).get(*outputs) Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2 = outs Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2
projects/breast_cancer/MachineLearning.ipynb
dusenberrymw/incubator-systemml
apache-2.0
Eval
script = """ source("breastcancer/convnet_distrib_sgd.dml") as clf # Eval probs = clf::predict(X, C, Hin, Win, Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2) [loss, accuracy] = clf::eval(probs, Y) probs_val = clf::predict(X_val, C, Hin, Win, Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2) [loss_val, accuracy_val] = clf::eval(probs_val, Y_val) """ outputs = ("loss", "accuracy", "loss_val", "accuracy_val") script = (dml(script).input(X=X, X_val=X_val, Y=Y, Y_val=Y_val, C=c, Hin=size, Win=size, Wc1=Wc1, bc1=bc1, Wc2=Wc2, bc2=bc2, Wc3=Wc3, bc3=bc3, Wa1=Wa1, ba1=ba1, Wa2=Wa2, ba2=ba2) .output(*outputs)) loss, acc, loss_val, acc_val = ml.execute(script).get(*outputs) loss, acc, loss_val, acc_val script = """ source("breastcancer/convnet_distrib_sgd.dml") as clf # Dummy data [X, Y, C, Hin, Win] = clf::generate_dummy_data(1216) [X_val, Y_val, C, Hin, Win] = clf::generate_dummy_data(100) # Eval probs = clf::predict(X, C, Hin, Win, Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2) [loss, accuracy] = clf::eval(probs, Y) probs_val = clf::predict(X_val, C, Hin, Win, Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2) [loss_val, accuracy_val] = clf::eval(probs_val, Y_val) """ outputs = ("loss", "accuracy", "loss_val", "accuracy_val") script = (dml(script).input(Wc1=Wc1, bc1=bc1, Wc2=Wc2, bc2=bc2, Wc3=Wc3, bc3=bc3, Wa1=Wa1, ba1=ba1, Wa2=Wa2, ba2=ba2) .output(*outputs)) loss, acc, loss_val, acc_val = ml.execute(script).get(*outputs) loss, acc, loss_val, acc_val
projects/breast_cancer/MachineLearning.ipynb
dusenberrymw/incubator-systemml
apache-2.0
# script = """ # N = 102400 # num examples # C = 3 # num input channels # Hin = 256 # input height # Win = 256 # input width # X = rand(rows=N, cols=C*Hin*Win, pdf="normal") # """ # outputs = "X" # script = dml(script).output(*outputs) # thisX = ml.execute(script).get(*outputs) # thisX # script = """ # f = function(matrix[double] X) return(matrix[double] Y) { # while(FALSE){} # a = as.scalar(rand(rows=1, cols=1)) # Y = X * a # } # Y = f(X) # """ # outputs = "Y" # script = dml(script).input(X=thisX).output(*outputs) # thisY = ml.execute(script).get(*outputs) # thisY
projects/breast_cancer/MachineLearning.ipynb
dusenberrymw/incubator-systemml
apache-2.0
Create and fit Spark ML model
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import VectorAssembler
from pyspark.ml import Pipeline

# Create feature vectors. Ignore arrdelay and its derivative, is_late.
feature_assembler = VectorAssembler(
    inputCols=[x for x in training.columns if x not in ["is_late", "arrdelay"]],
    outputCol="features")

reg = LogisticRegression().setParams(
    maxIter=100,
    labelCol="is_late",
    predictionCol="prediction")

model = Pipeline(stages=[feature_assembler, reg]).fit(training)
spark/Logistic Regression Example.ipynb
zoltanctoth/bigdata-training
gpl-2.0
Predict whether the aircraft will be late
predicted = model.transform(test) predicted.take(1)
spark/Logistic Regression Example.ipynb
zoltanctoth/bigdata-training
gpl-2.0
Check model performance
predicted = predicted.withColumn("is_late",is_late(predicted.arrdelay)) predicted.crosstab("is_late","prediction").show()
spark/Logistic Regression Example.ipynb
zoltanctoth/bigdata-training
gpl-2.0
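Accuracy, precision, and recall can be read straight off a confusion table like the one `crosstab` prints. A small pandas sketch (the counts below are invented, not the model's actual results):

```python
import pandas as pd

# Rows: actual is_late (0/1); columns: predicted label. Counts are made up.
ct = pd.DataFrame([[800, 50], [120, 230]], index=[0, 1], columns=[0, 1])

total = ct.values.sum()
accuracy = (ct.loc[0, 0] + ct.loc[1, 1]) / total  # diagonal / total
precision = ct.loc[1, 1] / ct[1].sum()            # TP / predicted positive
recall = ct.loc[1, 1] / ct.loc[1].sum()           # TP / actual positive
print(round(accuracy, 3), round(precision, 3), round(recall, 3))  # 0.858 0.821 0.657
```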
The data goes all the way back to 1967 and is updated weekly. Blaze provides us with the first 10 rows of the data for display. Just to confirm, let's just count the number of rows in the Blaze expression:
fred_ccsa.count()
notebooks/data/quandl.fred_ccsa/notebook.ipynb
quantopian/research_public
apache-2.0
Let's go plot it for fun. This data set is definitely small enough to just put right into a Pandas DataFrame
unrate_df = odo(fred_ccsa, pd.DataFrame) unrate_df.plot(x='asof_date', y='value') plt.xlabel("As Of Date (asof_date)") plt.ylabel("Unemployment Claims") plt.title("United States Unemployment Claims") plt.legend().set_visible(False) unrate_recent = odo(fred_ccsa[fred_ccsa.asof_date >= '2002-01-01'], pd.DataFrame) unrate_recent.plot(x='asof_date', y='value') plt.xlabel("As Of Date (asof_date)") plt.ylabel("Unemployment Claims") plt.title("United States Unemployment Claims") plt.legend().set_visible(False)
notebooks/data/quandl.fred_ccsa/notebook.ipynb
quantopian/research_public
apache-2.0
Table of Contents

Outer Join Operator
CHAR datatype size increase
Binary Data Type
Boolean Data Type
Synonyms for Data Types
Function Synonyms
Netezza Compatibility
Select Enhancements
Hexadecimal Functions
Table Creation with Data

<a id='outer'></a>

Outer Join Operator

Db2 allows the use of the Oracle outer-join operator when Oracle compatibility is turned on within a database. In Db2 11, the outer join operator is available by default and does not require the DBA to turn on Oracle compatibility.

Db2 supports standard join syntax for LEFT and RIGHT OUTER JOINs. However, Oracle uses a proprietary operator, "(+)", to mark the "null-producing" column reference that precedes it in an implicit join notation. That is, (+) appears in the WHERE clause and refers to a column of the inner table in a left outer join. For instance:

<pre> SELECT * FROM T1, T2 WHERE T1.C1 = T2.C2 (+) </pre>

is the same as:

<pre> SELECT * FROM T1 LEFT OUTER JOIN T2 ON T1.C1 = T2.C2 </pre>

In this example, we get a list of departments and their employees, as well as the names of departments that have no employees. This example uses the standard Db2 syntax.
%%sql SELECT DEPTNAME, LASTNAME FROM DEPARTMENT D LEFT OUTER JOIN EMPLOYEE E ON D.DEPTNO = E.WORKDEPT
v1/Db2 11 Compatibility Features.ipynb
DB2-Samples/db2jupyter
apache-2.0
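The "keeps departments with no employees" behaviour of the standard syntax can be checked outside Db2 as well; the sketch below uses SQLite (which supports `LEFT OUTER JOIN` but not Oracle's `(+)` operator) on two made-up tables shaped like DEPARTMENT and EMPLOYEE:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE department (deptno TEXT, deptname TEXT);
CREATE TABLE employee (workdept TEXT, lastname TEXT);
INSERT INTO department VALUES ('A00', 'SPIFFY COMPUTER'), ('B01', 'PLANNING');
INSERT INTO employee VALUES ('A00', 'HAAS');
""")

# LEFT OUTER JOIN keeps departments with no employees, padding with NULL.
rows = con.execute("""
SELECT d.deptname, e.lastname
FROM department d LEFT OUTER JOIN employee e ON d.deptno = e.workdept
ORDER BY d.deptno
""").fetchall()
print(rows)  # [('SPIFFY COMPUTER', 'HAAS'), ('PLANNING', None)]
```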
TRANSLATE Function The translate function syntax in Db2 is: <pre> TRANSLATE(expression, to_string, from_string, padding) </pre> The TRANSLATE function returns a value in which one or more characters in a string expression might have been converted to other characters. The function converts all the characters in char-string-exp in from-string-exp to the corresponding characters in to-string-exp or, if no corresponding characters exist, to the pad character specified by padding. If no parameters are given to the function, the original string is converted to uppercase. In NPS mode, the translate syntax is: <pre> TRANSLATE(expression, from_string, to_string) </pre> If a character is found in the from string, and there is no corresponding character in the to string, it is removed. If it was using Db2 syntax, the padding character would be used instead. Note: If ORACLE compatibility is ON then the behavior of TRANSLATE is identical to NPS mode. This first example will uppercase the string.
%%sql SET SQL_COMPAT = 'NPS'; VALUES TRANSLATE('Hello');
v1/Db2 11 Compatibility Features.ipynb
DB2-Samples/db2jupyter
apache-2.0
OFFSET Extension

The FETCH FIRST n ROWS ONLY clause can also include an OFFSET keyword. The OFFSET keyword allows you to retrieve the answer set after skipping "n" rows. The syntax of the OFFSET keyword is:

<pre> OFFSET n ROWS FETCH FIRST x ROWS ONLY </pre>

The OFFSET n ROWS clause must precede the FETCH FIRST x ROWS ONLY clause. The OFFSET clause can be used to scroll down an answer set without having to hold a cursor. For instance, the first SELECT could request 10 rows by just using the FETCH FIRST clause. After that, you could request that the first 10 rows be skipped before retrieving the next 10 rows.

One thing you must be aware of is that the answer set could change between calls if you use this technique of a "moving" window. If rows are updated or added after your initial query, you may get different results. This is due to the way that Db2 adds rows to a table: after a DELETE followed by an INSERT, the inserted row may end up in the empty slot, and there is no guarantee of retrieval order. For this reason you are better off using an ORDER BY to force the ordering, although even that won't always prevent rows from changing positions. Here are the first 10 rows of the employee table (not ordered).
%%sql SELECT LASTNAME FROM EMPLOYEE FETCH FIRST 10 ROWS ONLY
v1/Db2 11 Compatibility Features.ipynb
DB2-Samples/db2jupyter
apache-2.0
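The paging semantics described above can be demonstrated in SQLite, which spells the same idea as `LIMIT x OFFSET n` rather than Db2's `OFFSET n ROWS FETCH FIRST x ROWS ONLY` (the table contents below are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employee (lastname TEXT)")
con.executemany("INSERT INTO employee VALUES (?)",
                [(f"NAME{i:02d}",) for i in range(25)])

# Page 1: first 10 rows; page 2: skip 10, take the next 10.
# ORDER BY makes the pages stable, as the text recommends.
page1 = con.execute(
    "SELECT lastname FROM employee ORDER BY lastname LIMIT 10").fetchall()
page2 = con.execute(
    "SELECT lastname FROM employee ORDER BY lastname LIMIT 10 OFFSET 10").fetchall()
print(page1[-1], page2[0])  # ('NAME09',) ('NAME10',)
```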
Back to Top

<a id="create"><a/>

Table Creation Extensions

The CREATE TABLE statement can now use a SELECT clause to generate the definition and load the data at the same time.

Create Table Syntax

The syntax of the CREATE TABLE statement has been extended with the AS (SELECT ...) WITH DATA clause:

<pre> CREATE TABLE <name> AS (SELECT ...) [ WITH DATA | DEFINITION ONLY ] </pre>

The table definition is generated from the SQL statement that you specify. The column names are derived from the columns in the SELECT list and can only be changed by specifying the column names as part of the table name: EMP(X,Y,Z,...) AS (...). For example, the following SQL will fail because a column list was not provided:
%sql -q DROP TABLE AS_EMP %sql CREATE TABLE AS_EMP AS (SELECT EMPNO, SALARY+BONUS FROM EMPLOYEE) DEFINITION ONLY;
v1/Db2 11 Compatibility Features.ipynb
DB2-Samples/db2jupyter
apache-2.0
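The "derived column needs a name" point can be seen in SQLite's `CREATE TABLE ... AS SELECT` (which always populates the table — it has no `DEFINITION ONLY` option); giving the expression an alias supplies the column name that the Db2 example above is missing:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employee (empno TEXT, salary REAL, bonus REAL)")
con.execute("INSERT INTO employee VALUES ('000010', 52750, 1000)")

# The derived column is given a name via an alias in the SELECT list.
con.execute("CREATE TABLE as_emp AS SELECT empno, salary + bonus AS pay FROM employee")

cols = [row[1] for row in con.execute("PRAGMA table_info(as_emp)")]
print(cols)  # ['empno', 'pay']
```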
A growing collection of tasks is readily available in pyannote.audio.tasks...
from pyannote.audio.tasks import __all__ as TASKS; print('\n'.join(TASKS))
tutorials/add_your_own_task.ipynb
pyannote/pyannote-audio
mit
... but you will eventually want to use pyannote.audio to address a different task. In this example, we will add a new task addressing the sound event detection problem. Problem specification A problem is expected to be solved by a model $f$ that takes an audio chunk $X$ as input and returns its predicted solution $\hat{y} = f(X)$. Resolution Depending on the addressed problem, you might expect the model to output just one prediction for the whole audio chunk (Resolution.CHUNK) or a temporal sequence of predictions (Resolution.FRAME). In our particular case, we would like the model to provide one decision for the whole chunk:
from pyannote.audio.core.task import Resolution resolution = Resolution.CHUNK
tutorials/add_your_own_task.ipynb
pyannote/pyannote-audio
mit
Type of problem Similarly, the type of your problem may fall into one of these generic machine learning categories: * Problem.BINARY_CLASSIFICATION for binary classification * Problem.MONO_LABEL_CLASSIFICATION for multi-class classification * Problem.MULTI_LABEL_CLASSIFICATION for multi-label classification * Problem.REGRESSION for regression * Problem.REPRESENTATION for representation learning In our particular case, we would like the model to do multi-label classification because one audio chunk may contain multiple sound events:
from pyannote.audio.core.task import Problem problem = Problem.MULTI_LABEL_CLASSIFICATION from pyannote.audio.core.task import Specifications specifications = Specifications( problem=problem, resolution=resolution, duration=5.0, classes=["Speech", "Dog", "Cat", "Alarm_bell_ringing", "Dishes", "Frying", "Blender", "Running_water", "Vacuum_cleaner", "Electric_shaver_toothbrush"], )
tutorials/add_your_own_task.ipynb
pyannote/pyannote-audio
mit
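The multi-label target implied by `Problem.MULTI_LABEL_CLASSIFICATION` is just a {0,1} vector with one slot per class; a minimal encoding sketch (the class list is copied from the `Specifications` above):

```python
import numpy as np

classes = ["Speech", "Dog", "Cat", "Alarm_bell_ringing", "Dishes",
           "Frying", "Blender", "Running_water", "Vacuum_cleaner",
           "Electric_shaver_toothbrush"]

def encode(active):
    # Several classes may be active at once -- that is what makes it multi-label.
    y = np.zeros(len(classes))
    for name in active:
        y[classes.index(name)] = 1
    return y

y = encode(["Speech", "Dog"])
print(y.sum())  # 2.0: two classes active in the same chunk
```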
A task is expected to be solved by a model $f$ that (usually) takes an audio chunk $X$ as input and returns its predicted solution $\hat{y} = f(X)$. To help training the model $f$, the task $\mathcal{T}$ is in charge of - generating $(X, y)$ training samples using the dataset - defining the loss function $\mathcal{L}(y, \hat{y})$
from typing import Optional, Tuple, Union
import math

import numpy as np
import torch
import torch.nn as nn
from torch_audiomentations.core.transforms_interface import BaseWaveformTransform

from pyannote.core import Annotation, Segment, SlidingWindow
from pyannote.database import Protocol
from pyannote.audio import Model
from pyannote.audio.core.task import Problem, Resolution, Specifications, Task
from pyannote.audio.utils.random import create_rng_for_worker

# Your custom task must be a subclass of `pyannote.audio.core.task.Task`
class SoundEventDetection(Task):
    """Sound event detection"""

    def __init__(
        self,
        protocol: Protocol,
        duration: float = 5.0,
        warm_up: Union[float, Tuple[float, float]] = 0.0,
        batch_size: int = 32,
        num_workers: int = None,
        pin_memory: bool = False,
        augmentation: BaseWaveformTransform = None,
        **other_params,
    ):
        super().__init__(
            protocol,
            duration=duration,
            warm_up=warm_up,
            batch_size=batch_size,
            num_workers=num_workers,
            pin_memory=pin_memory,
            augmentation=augmentation,
        )

    def setup(self, stage=None):
        if stage == "fit":
            # load metadata for training subset
            self.train_metadata_ = list()
            for training_file in self.protocol.train():
                self.train_metadata_.append({
                    # path to audio file (str)
                    "audio": training_file["audio"],
                    # duration of audio file (float)
                    "duration": training_file["duration"],
                    # reference annotation (pyannote.core.Annotation)
                    "annotation": training_file["annotation"],
                })

            # gather the list of classes
            classes = set()
            for training_file in self.train_metadata_:
                classes.update(training_file["annotation"].labels())
            classes = sorted(classes)

            # specify the addressed problem
            self.specifications = Specifications(
                # it is a multi-label classification problem
                problem=Problem.MULTI_LABEL_CLASSIFICATION,
                # we expect the model to output one prediction
                # for the whole chunk
                resolution=Resolution.CHUNK,
                # the model will ingest chunks with that duration (in seconds)
                duration=self.duration,
                # human-readable names of classes
                classes=classes)

            # `has_validation` is True iff protocol defines a development set
            if not self.has_validation:
                return

            # load metadata for validation subset
            self.validation_metadata_ = list()
            for validation_file in self.protocol.development():
                self.validation_metadata_.append({
                    "audio": validation_file["audio"],
                    "num_samples": math.floor(validation_file["duration"] / self.duration),
                    "annotation": validation_file["annotation"],
                })

    def train__iter__(self):
        # this method generates training samples, one at a time, "ad infinitum". each worker
        # of the dataloader will run it, independently from other workers. pyannote.audio and
        # pytorch-lightning will take care of making batches out of it.

        # create worker-specific random number generator (RNG) to avoid this common bug:
        # tanelp.github.io/posts/a-bug-that-plagues-thousands-of-open-source-ml-projects/
        rng = create_rng_for_worker(self.model.current_epoch)

        # load list and number of classes
        classes = self.specifications.classes
        num_classes = len(classes)

        # yield training samples "ad infinitum"
        while True:
            # select training file at random
            random_training_file, *_ = rng.choices(self.train_metadata_, k=1)

            # select one chunk at random
            random_start_time = rng.uniform(0, random_training_file["duration"] - self.duration)
            random_chunk = Segment(random_start_time, random_start_time + self.duration)

            # load audio excerpt corresponding to random chunk
            X = self.model.audio.crop(random_training_file["audio"], random_chunk, fixed=self.duration)

            # load labels corresponding to random chunk as {0|1} numpy array
            # y[k] = 1 means that kth class is active
            y = np.zeros((num_classes,))
            active_classes = random_training_file["annotation"].crop(random_chunk).labels()
            for active_class in active_classes:
                y[classes.index(active_class)] = 1

            # yield training samples as a dict (use 'X' for input and 'y' for target)
            yield {'X': X, 'y': y}

    def train__len__(self):
        # since train__iter__ runs "ad infinitum", we need a way to define what an epoch is.
        # this is the purpose of this method. it outputs the number of training samples that
        # make an epoch.

        # we compute this number as the total duration of the training set divided by the
        # duration of training chunks. we make sure that an epoch is at least one batch long,
        # or pytorch-lightning will complain
        train_duration = sum(training_file["duration"] for training_file in self.train_metadata_)
        return max(self.batch_size, math.ceil(train_duration / self.duration))

    def val__getitem__(self, sample_idx):
        # load list and number of classes
        classes = self.specifications.classes
        num_classes = len(classes)

        # find which part of the validation set corresponds to sample_idx
        num_samples = np.cumsum([
            validation_file["num_samples"] for validation_file in self.validation_metadata_])
        file_idx = np.where(sample_idx < num_samples)[0][0]
        validation_file = self.validation_metadata_[file_idx]
        idx = sample_idx - (num_samples[file_idx] - validation_file["num_samples"])
        chunk = SlidingWindow(start=0., duration=self.duration, step=self.duration)[idx]

        # load audio excerpt corresponding to current chunk
        X = self.model.audio.crop(validation_file["audio"], chunk, fixed=self.duration)

        # load labels corresponding to current chunk as {0|1} numpy array
        # y[k] = 1 means that kth class is active
        y = np.zeros((num_classes,))
        active_classes = validation_file["annotation"].crop(chunk).labels()
        for active_class in active_classes:
            y[classes.index(active_class)] = 1

        return {'X': X, 'y': y}

    def val__len__(self):
        return sum(validation_file["num_samples"] for validation_file in self.validation_metadata_)

    # `pyannote.audio.core.task.Task` base class provides `LightningModule.training_step` and
    # `LightningModule.validation_step` methods that rely on self.specifications to guess which
    # loss and metrics should be used. you can obviously choose to customize them.
    # More details can be found in pytorch-lightning documentation and in
    # pyannote.audio.core.task.Task source code.

    # def training_step(self, batch, batch_idx: int):
    #     return loss

    # def validation_step(self, batch, batch_idx: int):
    #     return metric

# pyannote.audio.tasks.segmentation.mixin also provides a convenient mixin
# for "segmentation" tasks (ie. with Resolution.FRAME) that already defines
# a bunch of useful methods.
tutorials/add_your_own_task.ipynb
pyannote/pyannote-audio
mit
You may have noticed that we never declared the types of the variables a, b, and c. In Python you don't have to: the language infers the type from the value you assign to the variable. For a that type is int (integer), for b it is str (string), and for c it is float (floating-point number). In the near future you will most likely encounter these types:

| Type | Python | C++ equivalent | Pascal equivalent |
| --- | --- | --- | --- |
| Integer | int | int | Integer |
| Floating-point number | float | double | Double |
| String | str | std::string | String |
| Boolean | bool | bool | Boolean |
| Array | list | std::vector&lt;&gt; | Array |
| Set | set | std::set&lt;&gt; | none |
| Dictionary | dict | std::map&lt;&gt; | none |

You can find out a variable's type with the type function:
a = 5.0
b = "LKSH students are awesome =^_^="
print(type(a))
print(type(b))
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
Parallel assignment

In Python you can assign values to several variables at once:
a, b = 3, 5 print(a) print(b)
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
Python first evaluates all the values on the right-hand side and only then assigns the computed values to the variables on the left:
a = 3 b = 5 a, b = b, a + b print(a) print(b)
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
This makes it possible, for example, to swap the values of two variables in a single line:
a = "apple" b = "banana" a, b = b, a print(a) print(b)
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
Input and output

As you have already seen, Python's print function writes to the screen. You can pass it several values separated by commas — they will be printed on one line, separated by spaces:
a = 2 b = 3 print(a, "+", b, "=", a + b)
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
For keyboard input there is the input function. It reads one whole line:
a = input() b = input() print(a + b)
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
Aha, something went wrong! We got 23 instead of 5. That happened because input() returns a string (str), not a number (int). To fix this, we have to explicitly convert the result of input() to int.
a = int(input()) b = int(input()) print(a + b)
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
That's better :) A common mistake is to forget the parentheses after input. Let's see what happens in that case:
a = int(input) b = int(input) print(a + b)
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
This error translates roughly as: TypeError: int() argument must be a string, a bytes-like object or a number, not a function. Now you know what to do if you see this error ;)

Arithmetic operations

Let's learn how to add, multiply, subtract, and perform other operations on integers (type int) and floating-point numbers (type float). Keep in mind that to a computer an integer and a floating-point number are quite different things. The main binary (two-operand) arithmetic operations you will need:

| Operation | Python notation | C++ equivalent | Pascal equivalent | Priority |
| --- | --- | --- | --- | --- |
| Addition | a + b | a + b | a + b | 3 |
| Subtraction | a - b | a - b | a - b | 3 |
| Multiplication | a * b | a * b | a * b | 2 |
| True division | a / b | a / b | a / b | 2 |
| Integer division (rounding down) | a // b | a / b | a div b | 2 |
| Remainder | a % b | a % b | a mod b | 2 |
| Exponentiation | a ** b | pow(a, b) | power(a, b) | 1 |

Addition, subtraction, and multiplication work exactly as in other languages:
print(11 + 7, 11 - 7, 11 * 7, (2 + 9) * (12 - 5))
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
True division always yields a floating-point number (float), regardless of the operands (as long as the divisor is not 0):
print(12 / 8, 12 / 4, 12 / -7)
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
The result of integer division is the result of true division, rounded down to the nearest smaller integer:
print(12 // 8, 12 // 4, 12 // -7)
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
The remainder is what is left of a number after integer division. If c = a // b, then a can be written as a = c * b + r, where r is the remainder. Example: a = 20, b = 8, c = a // b = 2. Then a = c * b + r becomes 20 = 2 * 8 + 4, so the remainder is 4.
print(12 % 8, 12 % 4, 12 % -7)
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
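The identity a = (a // b) * b + (a % b) from the text also explains the negative results above — Python's remainder takes the sign of the divisor:

```python
for a, b in [(20, 8), (12, 8), (12, -7), (-12, 7)]:
    c, r = a // b, a % b
    # Python guarantees a == c * b + r, with r having the sign of b.
    assert a == c * b + r

print(12 // -7, 12 % -7)  # -2 -2: floor(-1.71...) is -2, and 12 - (-2)*(-7) = -2
```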
Raising a to the power b means multiplying a by itself b times. In mathematics this is written $a^b$.
print(5 ** 2, 2 ** 4, 13 ** 0)
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
Exponentiation also works for floating-point a and negative b. A number raised to a negative power is one divided by the same number raised to the positive power: $a^{-b} = \frac{1}{a^b}$
print(2.5 ** 2, 2 ** -3)
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
Let's see what happens if we raise an integer to a large power:
print(5 ** 100)
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
Unlike C++ or Pascal, Python computes the result correctly even when it is a very large number. And what if we raise a floating-point number to a large power?
print(5.0 ** 100)
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
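The contrast between exact integer arithmetic and limited float precision described here can be checked directly:

```python
exact = 5 ** 100    # arbitrary-precision int: every digit is exact
approx = 5.0 ** 100  # 64-bit float: only ~16 significant digits survive

print(len(str(exact)))       # 70: 5**100 has 70 decimal digits
print(int(approx) == exact)  # False: the float rounded away the low digits
```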
Notation of the form &lt;number&gt;e&lt;exponent&gt; is just another way to write $\text{<number>} \cdot 10^\text{<exponent>}$. That is: $$\text{7.888609052210118e+69} = 7.888609052210118 \cdot 10^{69}$$ which is the same as 7888609052210118000000000000000000000000000000000000000000000000000000. This result is not as precise as the previous one, because Python stores every floating-point number in a fixed amount of memory, and can therefore only hold it with limited precision. Exponentiation also works with fractional exponents. For example, $\sqrt{a} = a^\frac{1}{2} = a^{0.5}$
print(2 ** 0.5, 9 ** 0.5)
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
At school you were probably told that you cannot take the square root of a negative number. C++ and Pascal would raise an error if you tried. Let's see what Python does:
print((-4) ** 0.5)
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
Actually, that's not entirely true. You can take the square root of a negative number after all, but the result is not a real number — it is a so-called complex number. If you see a strange value like this in your program's output, your code most likely took the root of a negative number, so go look for the bug. For now you don't need to know anything about complex numbers.

Arithmetic expressions

Naturally, as in many other programming languages, you can build larger expressions out of variables, numbers, arithmetic operations, and parentheses. For example:
a = 4 b = 11 c = (a ** 2 + b * 3) / (9 - b % (a + 1)) print(c)
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
In the example above, the variable c is assigned the value of the expression $$\frac{a^2 + b \cdot 3}{9 - b \text{ mod } (a + 1)}$$ Without parentheses, the operations in an expression are evaluated in priority order (see the table above): first the operations with priority 1, then priority 2, and so on. Operations of equal priority are evaluated left to right. You can use parentheses to change the evaluation order.
print(2 * 2 + 2) print(2 * (2 + 2))
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
Type conversion

If you have a value of one type, you can convert it to another type by calling the function with the same name as the target type:
a = "-15" print(a, int(a), float(a))
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
More examples:
# a_int, b_float, c_str are just variable names.
# They are named this way to make it easier to see which value is which.
a_int = 3
b_float = 5.0
c_str = "10"
print(a_int, b_float, c_str)

# Without the conversion, adding them would raise an error, because Python
# cannot add numbers to strings.
print("a_int + int(c_str) =", a_int + int(c_str))
print("str(a_int) + str(b_float) =", str(a_int) + str(b_float))
print("float(c_str) =", float(c_str))
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
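The errors mentioned above are easy to reproduce. A minimal sketch showing what happens when you skip the conversion, or convert a string that does not contain a number:

```python
# Adding a number to a string raises a TypeError:
try:
    3 + "10"
except TypeError as e:
    print("TypeError:", e)

# Converting a non-numeric string raises a ValueError:
try:
    int("hello")
except ValueError as e:
    print("ValueError:", e)
```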
Gradient Boosted Trees: model understanding

<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://tensorflow.google.cn/tutorials/estimator/boosted_trees_model_understanding"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png"> View on TensorFlow.org</a> </td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/estimator/boosted_trees_model_understanding.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a> </td>
<td> <img><a>View source on GitHub</a> </td>
<td> <img><a>Download notebook</a> </td>
</table>

Warning: Estimators are not recommended for new code. Estimators run v1.Session-style code, which is harder to write correctly and can behave unexpectedly, especially when combined with TF 2 code. Estimators do fall within our compatibility guarantees, but they will receive no fixes other than for security vulnerabilities. See the migration guide for details.

Note: TensorFlow Decision Forests provides modern Keras-based implementations of many state-of-the-art decision forest algorithms.

For an end-to-end walkthrough of a Gradient Boosting model, see Training Boosted Trees models in TensorFlow. In this tutorial, you will:

- Learn how to interpret a Boosted Trees model both locally and globally
- Gain intuition for how a Boosted Trees model fits a dataset

How to interpret Boosted Trees models locally and globally

Local interpretability refers to understanding a model's predictions at the level of a single example, while global interpretability refers to understanding the model as a whole. Such techniques can help machine learning (ML) practitioners detect bias and bugs during model development.

For local interpretability, you will learn how to create and visualize per-instance contributions. To distinguish them from feature importances, these values are called DFCs (directional feature contributions).

For global interpretability, you will retrieve and visualize gain-based feature importances and permutation feature importances, and show aggregated DFCs.

Load the titanic dataset

This tutorial uses the titanic dataset, where the goal is to predict passenger survival given characteristics such as gender, age, and class.
!pip install statsmodels

import numpy as np
import pandas as pd
from IPython.display import clear_output

# Load dataset.
dftrain = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/train.csv')
dfeval = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/eval.csv')
y_train = dftrain.pop('survived')
y_eval = dfeval.pop('survived')

import tensorflow as tf
tf.random.set_seed(123)
site/zh-cn/tutorials/estimator/boosted_trees_model_understanding.ipynb
tensorflow/docs-l10n
apache-2.0
For a description of the features, see the previous tutorial.

Create feature columns and input functions, and train the estimator

Data preprocessing

Build the dataset from the raw numeric features and from the non-numeric features (such as gender and class) processed with one-hot encoding.
fc = tf.feature_column
CATEGORICAL_COLUMNS = ['sex', 'n_siblings_spouses', 'parch', 'class', 'deck',
                       'embark_town', 'alone']
NUMERIC_COLUMNS = ['age', 'fare']

def one_hot_cat_column(feature_name, vocab):
  return fc.indicator_column(
      fc.categorical_column_with_vocabulary_list(feature_name, vocab))

feature_columns = []
for feature_name in CATEGORICAL_COLUMNS:
  # Need to one-hot encode categorical features.
  vocabulary = dftrain[feature_name].unique()
  feature_columns.append(one_hot_cat_column(feature_name, vocabulary))

for feature_name in NUMERIC_COLUMNS:
  feature_columns.append(fc.numeric_column(feature_name, dtype=tf.float32))
site/zh-cn/tutorials/estimator/boosted_trees_model_understanding.ipynb
tensorflow/docs-l10n
apache-2.0
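The one-hot encoding performed by `indicator_column` above can be illustrated without TensorFlow. A minimal sketch of the idea (the vocabulary and category value below are hypothetical examples, not read from the dataset):

```python
def one_hot(value, vocab):
    # Map a categorical value to a one-hot vector over the vocabulary:
    # a 1.0 in the position of the matching vocabulary entry, 0.0 elsewhere.
    return [1.0 if value == v else 0.0 for v in vocab]

# Hypothetical vocabulary for a 'sex' column:
vocab = ['male', 'female']
print(one_hot('female', vocab))  # [0.0, 1.0]
```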