Add a column
responses = househld[['REGION','WTFA_HH']].groupby('REGION').count()
responses.name = "Responses"
by_region['Responses'] = responses
by_region
notebooks/Explore_Files.ipynb
gsentveld/lunch_and_learn
mit
And we will change the index to a more complex one, based on the documentation of the household file.
by_region.index = ['Northeast','Midwest','South','West']
by_region
Saving this result
We can use any of the to_xyz() functions to save this data to a file. Here we don't supply a path, so the call simply returns the result in the requested format.
print(by_region.to_json())
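A minimal sketch of this behaviour, using a hypothetical two-region frame standing in for by_region:

```python
import pandas as pd

# Hypothetical miniature of the by_region result.
by_region_demo = pd.DataFrame({"Responses": [10, 20]},
                              index=["Northeast", "Midwest"])

# With no path argument, to_json() returns the JSON string instead of writing a file.
json_text = by_region_demo.to_json()
print(json_text)
```

Passing a file path as the first argument would write the same text to disk instead of returning it.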
Dealing with missing values
It appears that the household file also holds information about why people did not respond. This field is empty if people responded. We are going to use that to filter the data, with a boolean index. We will use the NON_INTV response code to create the boolean index.
non_response_code = househld['NON_INTV']
import math
# If the value Is Not A Number math.isnan() will return True.
responded = [math.isnan(x) for x in non_response_code]
notresponded = [not math.isnan(x) for x in non_response_code]
resp = househld[responded]
nonresp = househld[notresponded]
print("Total size: {}".format(hou...
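The same boolean masks can also be built with pandas' vectorised isna(), which avoids the per-element comprehension. A sketch with a hypothetical miniature of the household frame:

```python
import numpy as np
import pandas as pd

# Hypothetical miniature: NON_INTV is NaN when the household responded.
demo = pd.DataFrame({"NON_INTV": [np.nan, 1.0, np.nan, 2.0]})

responded = demo["NON_INTV"].isna()   # vectorised equivalent of the math.isnan comprehension
resp = demo[responded]
nonresp = demo[~responded]
print(len(resp), len(nonresp))
```

The `~` operator negates the mask, so the two subsets partition the frame.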
Now we group by the reason code for why people did not respond.
non_intv_group = nonresp.groupby('NON_INTV')
non_intv_group.size()
Filling missing data
If we just plot the data from the original DataFrame, we only get the rows that have a value. We can use the fillna() function to fix that and see all the data.
househld['INTV_MON'].hist(by=househld['NON_INTV'].fillna(0))
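A tiny sketch of what fillna(0) does here, with a hypothetical column in place of NON_INTV:

```python
import numpy as np
import pandas as pd

# Hypothetical column with missing values: fillna(0) turns the NaNs into an
# explicit 0 group, so no rows are silently dropped when grouping or plotting.
non_intv = pd.Series([np.nan, 1.0, np.nan, 2.0])
filled = non_intv.fillna(0)
print(filled.tolist())
```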
Neural nets
All nets inherit from sklearn.BaseEstimator and have the same interface as the other wrappers in REP (see 01-howto-Classifiers for details). All of these net libraries support: classification, multi-class classification, regression, multi-target regression, and additional fitting (using the partial_fit method), and don't supp...
variables = list(data.columns[:25])
howto/06-howto-neural-nets.ipynb
scr4t/rep
apache-2.0
Simple training
tn = TheanetsClassifier(features=variables, layers=[20],
                        trainers=[{'optimize': 'nag', 'learning_rate': 0.1}])
tn.fit(train_data, train_labels)
Predicting probabilities, measuring the quality
# predict probabilities for each class
prob = tn.predict_proba(test_data)
print prob
print 'ROC AUC', roc_auc_score(test_labels, prob[:, 1])
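The ROC AUC score used above can also be computed directly as a rank statistic: the probability that a randomly chosen positive example outscores a randomly chosen negative one. A minimal NumPy sketch, independent of REP/sklearn:

```python
import numpy as np

def auc_from_scores(labels, scores):
    """ROC AUC as the rank statistic P(score_pos > score_neg), ties counted half."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# Perfectly separated toy scores give AUC == 1.0
print(auc_from_scores([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))
```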
Theanets multistage training
In some cases we need to continue training: e.g., we have new data, or the current trainer is not effective anymore. For this purpose there is the partial_fit method, with which you can continue training using a different trainer or different data.
tn = TheanetsClassifier(features=variables, layers=[10, 10],
                        trainers=[{'optimize': 'rprop'}])
tn.fit(train_data, train_labels)
print('training complete')
Second stage of fitting
tn.partial_fit(train_data, train_labels, **{'optimize': 'adadelta'})

# predict probabilities for each class
prob = tn.predict_proba(test_data)
print prob
print 'ROC AUC', roc_auc_score(test_labels, prob[:, 1])
Let's train a network using the Rprop algorithm
import neurolab

nl = NeurolabClassifier(features=variables, layers=[10], epochs=40,
                        trainf=neurolab.train.train_rprop)
nl.fit(train_data, train_labels)
print('training complete')
Pybrain
from rep.estimators import PyBrainClassifier
print PyBrainClassifier.__doc__

pb = PyBrainClassifier(features=variables, layers=[10, 2],
                       hiddenclass=['TanhLayer', 'SigmoidLayer'])
pb.fit(train_data, train_labels)
print('training complete')
Advantages of a common interface
Let's build an ensemble of neural networks. This will be done with the bagging meta-algorithm over the Theanets classifier (the same can be done with any neural network). In practice, one needs many networks to get better predictions than those obtained from a single network.
from sklearn.ensemble import BaggingClassifier

base_tn = TheanetsClassifier(layers=[20], trainers=[{'min_improvement': 0.01}])
bagging_tn = BaggingClassifier(base_estimator=base_tn, n_estimators=3)
bagging_tn.fit(train_data[variables], train_labels)
print('training complete')
prob = bagging_tn.predict_proba(test_data...
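As a sketch of the bagging idea itself (bootstrap resampling plus vote averaging), here is a pure-NumPy version where a trivial hypothetical threshold "stump" stands in for a neural network — not REP's or sklearn's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_stump(X, y):
    # Trivial base learner: threshold feature 0 at the midpoint of the two class means.
    return (X[y == 0, 0].mean() + X[y == 1, 0].mean()) / 2

def bagged_predict_proba(X_train, y_train, X_test, n_estimators=5):
    # Bagging: fit each base learner on a bootstrap resample, then average the votes.
    votes = np.zeros(len(X_test))
    for _ in range(n_estimators):
        idx = rng.integers(0, len(X_train), len(X_train))
        X_b, y_b = X_train[idx], y_train[idx]
        if not ((y_b == 0).any() and (y_b == 1).any()):
            X_b, y_b = X_train, y_train  # degenerate resample: fall back to the full data
        thr = fit_stump(X_b, y_b)
        votes += (X_test[:, 0] > thr).astype(float)
    return votes / n_estimators

X_train = np.array([[0.0], [0.1], [0.2], [0.3], [0.9], [1.0], [1.1], [1.2]])
y_train = np.array([0, 0, 0, 0, 1, 1, 1, 1])
probs = bagged_predict_proba(X_train, y_train, np.array([[0.0], [1.1]]))
print(probs)
```

Averaging over resampled fits is exactly what BaggingClassifier does with the wrapped networks above.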
Gaussian Processes: a model for functions/continuous output; for a new input it returns the predicted output and its uncertainty.
display(Image(filename="GP_uq.png", width=630))
# source: http://scikit-learn.org/0.17/modules/gaussian_process.html
Tutorial_on_modern_kernel_methods.ipynb
ingmarschuster/rkhs_demo
gpl-3.0
Support Vector Machines: a model for classification; map the data nonlinearly to a higher-dimensional space and separate points of different classes with a plane (i.e. linearly).
display(Image(filename="SVM.png", width=700))
# source: https://en.wikipedia.org/wiki/Support_vector_machine
Feature engineering and two classification algorithms
Feature engineering in machine learning: map data to features with a function $\FM:\IS\to \RKHS$; handle nonlinear relations with linear methods ($\FM$ nonlinear); handle non-numerical data (e.g. text).
display(Image(filename="monomials_small.jpg", width=800))
# source: Bernhard Schölkopf
Working in Feature Space: we want the Feature Space $\RKHS$ (the codomain of $\FM$) to be a vector space to get nice mathematical structure; the definition of inner products induces norms and the possibility to measure angles; we can use linear algebra in $\RKHS$ to solve ML problems; inner products, angles, norms, distances induces nonlinear...
figkw = {"figsize": (4,4), "dpi": 150}
np.random.seed(5)
samps_per_distr = 20
data = np.vstack([stats.multivariate_normal(np.array([-2,0]), np.eye(2)*1.5).rvs(samps_per_distr),
                  stats.multivariate_normal(np.array([2,0]), np.eye(2)*1.5).rvs(samps_per_distr)])
distr_idx = np.r_[[0]*samps_per_distr, [1]*sam...
Classification using inner products in Feature Space compute mean feature space embedding $$\mu_{0} = \frac{1}{N_0} \sum_{l_i = 0} \FM(x_i) ~~~~~~~~ \mu_{1} = \frac{1}{N_1} \sum_{l_i = 1} \FM(x_i)$$ assign test point to most similar class in terms of inner product between point and mean embedding $\prodDot{\FM(x)}{\mu...
pl.figure(**figkw)
for (idx, c, marker) in [(0, 'r', (0,3,0)), (1, "b", "x")]:
    pl.scatter(*data[distr_idx==idx,:].T, c=c, marker=marker, alpha=0.2)
    pl.arrow(0, 0, *data[distr_idx==idx,:].mean(0),
             head_width=0.3, width=0.05, head_length=0.3, fc=c, ec=c)
pl.title(r"Mean embeddings for $\Phi(x)=x$")
pl.figure(**f...
Classification using density estimation: estimate the density for each class by centering a Gaussian at each sample and taking the mixture as the estimate $$\widehat{p}_0 = \frac{1}{N_0} \sum_{l_i = 0} \mathcal{N}(\cdot; x_i,\Sigma) ~~~~~~~~ \widehat{p}_1 = \frac{1}{N_1} \sum_{l_i = 1} \mathcal{N}(\cdot; x_i,\Sigma)$$
# Some plotting code
def apply_to_mg(func, *mg):
    # apply a function to points on a meshgrid
    x = np.vstack([e.flat for e in mg]).T
    return np.array([func(i.reshape((1,2))) for i in x]).reshape(mg[0].shape)

def plot_with_contour(samps, data_idx, cont_func, method_name=None, delta=0.025, pl=pl, colormesh...
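The mixture-of-Gaussians density estimate can be sketched in a few lines of NumPy. Everything below (kde_score, the toy clusters) is a hypothetical stand-in for the notebook's data, not the tutorial's own code:

```python
import numpy as np

def kde_score(test_point, samples, sigma=1.0):
    """Class density estimate: mean of isotropic Gaussians centred at the samples."""
    d2 = ((samples - test_point) ** 2).sum(axis=1)
    dim = samples.shape[1]
    norm = (2 * np.pi * sigma ** 2) ** (dim / 2)
    return np.exp(-d2 / (2 * sigma ** 2)).mean() / norm

# Assign a test point to the class with the larger density estimate.
class0 = np.array([[-2.0, 0.0], [-1.5, 0.5]])
class1 = np.array([[2.0, 0.0], [1.5, -0.5]])
x = np.array([-1.8, 0.1])
label = int(kde_score(x, class1) > kde_score(x, class0))
print(label)
```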
Classification using density estimation: estimate the density for each class by centering a Gaussian at each sample and taking the mixture as the estimate $$\widehat{p}_0 = \frac{1}{N_0} \sum_{l_i = 0} \mathcal{N}(\cdot; x_i,\Sigma) ~~~~~~~~ \widehat{p}_1 = \frac{1}{N_1} \sum_{l_i = 1} \mathcal{N}(\cdot; x_i,\Sigma)$$ assign test point $x$ to class...
class KMEclassification(object):
    def __init__(self, samps1, samps2, kernel):
        self.de1 = ro.RKHSDensityEstimator(samps1, kernel, 0.1)
        self.de2 = ro.RKHSDensityEstimator(samps2, kernel, 0.1)

    def classification_score(self, test):
        return (self.de1.eval_kme(test) - self.de2.eval_kme(test...
Applications
Kernel mean embedding: the mean feature with the canonical feature map $\frac{1}{N} \sum_{i = 1}^N \FM(x_i) = \frac{1}{N} \sum_{i = 1}^N \PDK(x_i, \cdot)$; this is the estimate of the kernel mean embedding of the distribution/density $\rho$ of $x_i$ $$\mu_\rho(\cdot) = \int \PDK(x,\cdot) \mathrm{d}\rho(x)$$ usin...
out_samps = data[distr_idx==0, :1] + 1
inp_samps = data[distr_idx==0, 1:] + 1

def plot_mean_embedding(cme, inp_samps, out_samps, p1=0., p2=1., offset=0.5):
    x = np.linspace(inp_samps.min()-offset, inp_samps.max()+offset, 200)
    fig = pl.figure(figsize=(10, 5))
    ax = [pl.subplot2grid((2, 2), (0, 1)), ...
Conditional mean embedding (3): closed form estimate given samples from input and output $$\begin{bmatrix}\PDK_Y(y_1, \cdot),& \dots &, \PDK_Y(y_N, \cdot)\end{bmatrix} \Gram_X^{-1} \begin{bmatrix}\PDK_X(x_1, \cdot)\\ \vdots \\ \PDK_X(x_N, \cdot)\end{bmatrix}$$ closed form estimate of output embedding for new input $x^...
HTML('<iframe width="560" height="315" src="https://www.youtube.com/embed/MpzaCCbX-z4?rel=0&amp;showinfo=0&amp;start=148" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>')
display(Image(filename="Pendulum_eigenfunctions.png", width=700))
display(Image(filename="KeywordClustering.png", widt...
In this example you will learn how to make use of the periodicity of the electrodes. As seen in TB 4 the transmission calculation takes a considerable amount of time. In this example we will redo the same calculation, but speed it up (no approximations made). A large computational effort is made on calculating the self...
graphene = sisl.geom.graphene(orthogonal=True)
TB_05/run.ipynb
zerothi/ts-tbt-sisl-tutorial
gpl-3.0
Note that the lines below differ from the same lines in TB 4, i.e. we save the electrode electronic structure without extending it 25 times.
H_elec = sisl.Hamiltonian(graphene)
H_elec.construct(([0.1, 1.43], [0., -2.7]))
H_elec.write('ELEC.nc')
See TB 2 for details on why we choose repeat/tile on the Hamiltonian object and not on the geometry, prior to construction.
H = H_elec.repeat(25, axis=0).tile(15, axis=1)
H = H.remove(H.geometry.close(H.geometry.center(what='cell'), R=10.))
dangling = [ia for ia in H.geometry.close(H.geometry.center(what='cell'), R=14.)
            if len(H.edges(ia)) < 3]
H = H.remove(dangling)
edge = [ia for ia in H.geometry.close(H.ge...
Exercises
Instead of analysing the same thing as in TB 4, you should perform the following actions to explore the available data-analysis capabilities of TBtrans. Please note the difference in run-time between example 04 and this example. Always use Bloch's theorem when applicable! HINT: please copy as much as you like f...
tbt = sisl.get_sile('siesta.TBT.nc')
# Easier manipulation of the geometry
geom = tbt.geometry
a_dev = tbt.a_dev  # the indices where we have DOS
# Extract the DOS, per orbital (hence sum=False)
DOS = tbt.ADOS(0, sum=False)
# Normalize DOS for plotting (maximum size == 400)
# This array has *all* energy points and orbit...
We're first going to train a multinomial logistic regression using simple gradient descent. TensorFlow works like this: first you describe the computation that you want to see performed: what the inputs, the variables, and the operations look like. These get created as nodes in a computation graph. This description...
# With gradient descent training, even this much (10000) data is prohibitive.
# Subset the training data for faster turnaround.
train_subset = 10000

graph = tf.Graph()
with graph.as_default():
    # Input data.
    # Load the training, validation and test data into constants that are
    # attached to the graph.
    t...
google_dl_udacity/lesson3/2_fullyconnected.ipynb
jinzishuai/learn2deeplearn
gpl-3.0
Results
lesson 1 sklearn LogisticRegression:
50 training samples: LogisticRegression score: 0.608200
100 training samples: LogisticRegression score: 0.708200
1000 training samples: LogisticRegression score: 0.829200
5000 training samples: LogisticRegression score: 0.846200
TensorFlow results above:
50: 43.3%
100: 53....
batch_size = 128

graph = tf.Graph()
with graph.as_default():
    # Input data. For the training data, we use a placeholder that will be fed
    # at run time with a training minibatch.
    tf_train_dataset = tf.placeholder(tf.float32,
                                      shape=(batch_size, image_size * image_size))
    tf_tra...
Demons Registration This function will align the fixed and moving images using the Demons registration method. If given a mask, the similarity metric will be evaluated using points sampled inside the mask. If given fixed and moving points the similarity metric value and the target registration errors will be displayed ...
def demons_registration(fixed_image, moving_image, fixed_points=None, moving_points=None):
    registration_method = sitk.ImageRegistrationMethod()

    # Create initial identity transformation.
    transform_to_displacment_field_filter = sitk.TransformToDisplacementFieldFilter()
    transform_to_displacment_f...
66_Registration_Demons.ipynb
thewtex/SimpleITK-Notebooks
apache-2.0
Running the Demons registration on this data will <font color="red">take a long time</font> (run it before going home). If you are less interested in accuracy you can switch the optimizer from conjugate gradient to gradient descent; it will run much faster, but the results are worse.
#%%timeit -r1 -n1
# Uncomment the line above if you want to time the running of this cell.

# Select the fixed and moving images, valid entries are in [0,9]
fixed_image_index = 0
moving_image_index = 7

tx = demons_registration(fixed_image=images[fixed_image_index],
                          moving_image=images[mo...
SimpleITK also includes a set of Demons filters which are independent of the ImageRegistrationMethod. These include:
1. DemonsRegistrationFilter
2. DiffeomorphicDemonsRegistrationFilter
3. FastSymmetricForcesDemonsRegistrationFilter
4. SymmetricForcesDemonsRegistrationFilter
As these filters are independent of the Ima...
def smooth_and_resample(image, shrink_factor, smoothing_sigma):
    """
    Args:
        image: The image we want to resample.
        shrink_factor: A number greater than one, such that the new image's size is original_size/shrink_factor.
        smoothing_sigma: Sigma for Gaussian smoothing, this is in physical (ima...
Now we will use our newly minted multiscale framework to perform registration with the Demons filters. Some things you can easily try out by editing the code below:
1. Is there really a need for multiscale - just call the multiscale_demons method without the shrink_factors and smoothing_sigmas parameters.
2. Which Demo...
# Define a simple callback which allows us to monitor the Demons filter's progress.
def iteration_callback(filter):
    print('\r{0}: {1:.2f}'.format(filter.GetElapsedIterations(), filter.GetMetric()), end='')

fixed_image_index = 0
moving_image_index = 7

# Select a Demons filter and configure it.
demons_filter = sit...
A Slightly Bigger Word-Document Matrix The example word-document matrix is taken from http://makeyourowntextminingtoolkit.blogspot.co.uk/2016/11/so-many-dimensions-and-how-to-reduce.html but expanded to cover a 3rd topic related to a home or house
# create a simple word-document matrix as a pandas dataframe, the content values have been normalised
words = ['wheel', ' seat', ' engine', ' slice', ' oven', ' boil', 'door', 'kitchen', 'roof']
print(words)
documents = ['doc1', 'doc2', 'doc3', 'doc4', 'doc5', 'doc6', 'doc7', 'doc8', 'doc9']
word_doc = pandas.DataFrame...
A03_svd_applied_to_slightly_bigger_word_document_matrix.ipynb
makeyourowntextminingtoolkit/makeyourowntextminingtoolkit
gpl-2.0
Yes, that worked: the reconstructed A2 is the same as the original A (within the bounds of small floating-point accuracy).
Now Reduce Dimensions, Extract Topics
Here we use only the top 3 values of the S singular value matrix, a pretty brutal reduction in dimensions! Why 3, and not 2? We'll only plot 2 dimensions for th...
# S_reduced is the same as S but with only the top 3 elements kept
S_reduced = numpy.zeros_like(S)

# only keep the top three singular values
l = 3
S_reduced[:l, :l] = S[:l, :l]

# show S_reduced which has less info than original S
print("S_reduced =\n", numpy.round(S_reduced, decimals=2))
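The truncation-and-reconstruction step can be reproduced with plain numpy.linalg.svd. The toy matrix A below is a made-up stand-in, not the notebook's word-document matrix:

```python
import numpy as np

# Toy stand-in matrix; truncate the SVD to the top-k singular values.
A = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.9, 0.0],
              [0.0, 0.1, 1.0]])
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2
A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]   # best rank-k approximation (Eckart-Young)
full = (U * s) @ Vt                     # keeping all singular values recovers A
```

Keeping every singular value reconstructs A exactly (up to floating-point error), while zeroing the tail gives the low-rank "topic" view.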
The above shows that there are indeed 3 clusters of documents. That matches our expectations as we constructed the example data set that way. Topics from New View of Words
# topics are a linear combination of original words
U_S_reduced = numpy.dot(U, S_reduced)
df = pandas.DataFrame(numpy.round(U_S_reduced, decimals=2), index=words)

# show colour coded so it is easier to see significant word contributions to a topic
df.style.background_gradient(cmap=plt.get_cmap('Blues'), low=0, high=2)
Operations on Tensors
Variables and Constants
Tensors in TensorFlow are either constant (tf.constant) or variables (tf.Variable). Constant values cannot be changed, while variable values can be. The main difference is that instances of tf.Variable have methods allowing us to change their values while tensors construc...
x = tf.constant([2, 3, 4])
x

x = tf.Variable(2.0, dtype=tf.float32, name='my_variable')
x.assign(45.8)  # TODO 1
x
x.assign_add(4)  # TODO 2
x
x.assign_sub(3)  # TODO 3
x
courses/machine_learning/deepdive2/introduction_to_tensorflow/solutions/1_core_tensorflow.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Point-wise operations
TensorFlow offers similar point-wise tensor operations as NumPy does: tf.add allows us to add the components of two tensors, tf.multiply allows us to multiply the components of two tensors, tf.subtract allows us to subtract the components of two tensors, and tf.math.* contains the usual math operations to be appli...
a = tf.constant([5, 3, 8])  # TODO 1
b = tf.constant([3, -1, 2])
c = tf.add(a, b)
d = a + b
print("c:", c)
print("d:", d)

a = tf.constant([5, 3, 8])  # TODO 2
b = tf.constant([3, -1, 2])
c = tf.multiply(a, b)
d = a * b
print("c:", c)
print("d:", d)

# tf.math.exp expects floats so we need to explicitly give the type
a...
NumPy Interoperability
In addition to native TF tensors, TensorFlow operations can take native Python types and NumPy arrays as operands.
# native python list
a_py = [1, 2]
b_py = [3, 4]
tf.add(a_py, b_py)  # TODO 1

# numpy arrays
a_np = np.array([1, 2])
b_np = np.array([3, 4])
tf.add(a_np, b_np)  # TODO 2

# native TF tensor
a_tf = tf.constant([1, 2])
b_tf = tf.constant([3, 4])
tf.add(a_tf, b_tf)  # TODO 3
Gradient Function To use gradient descent we need to take the partial derivatives of the loss function with respect to each of the weights. We could manually compute the derivatives, but with Tensorflow's automatic differentiation capabilities we don't have to! During gradient descent we think of the loss as a function...
# TODO 1
def compute_gradients(X, Y, w0, w1):
    with tf.GradientTape() as tape:
        loss = loss_mse(X, Y, w0, w1)
    return tape.gradient(loss, [w0, w1])

w0 = tf.Variable(0.0)
w1 = tf.Variable(0.0)

dw0, dw1 = compute_gradients(X, Y, w0, w1)
print("dw0:", dw0.numpy())
print("dw1", dw1.numpy())
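The partial derivatives that GradientTape computes can be sanity-checked without TensorFlow. A hedged NumPy sketch that compares analytic MSE gradients against central finite differences (the X, Y, and loss_mse here are hypothetical miniatures, not the notebook's data):

```python
import numpy as np

# MSE loss for the linear model y_hat = w0 + w1 * x.
X = np.linspace(0, 1, 10)
Y = 2 * X + 1

def loss_mse(w0, w1):
    return ((w0 + w1 * X - Y) ** 2).mean()

def grads(w0, w1):
    # Analytic partial derivatives of the MSE with respect to w0 and w1.
    r = w0 + w1 * X - Y
    return 2 * r.mean(), 2 * (r * X).mean()

# Central finite differences approximate the same partials numerically.
eps = 1e-6
dw0_num = (loss_mse(eps, 0.0) - loss_mse(-eps, 0.0)) / (2 * eps)
dw1_num = (loss_mse(0.0, eps) - loss_mse(0.0, -eps)) / (2 * eps)
dw0_ana, dw1_ana = grads(0.0, 0.0)
```

Automatic differentiation returns the analytic values directly, with no step-size tuning.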
Quick numbers: # RRT events & total # encounters (for the main hospital). For all patient & location types.
query_TotalEncs = """
SELECT count(1)
FROM (
    SELECT DISTINCT encntr_id
    FROM encounter
    WHERE encntr_complete_dt_tm < 4000000000000
      AND loc_facility_cd = '633867'
) t;
"""
cur.execute(query_TotalEncs)
cur.fetchall()
Data Science Notebooks/Notebooks/EDA/encounter_durations[EDA].ipynb
nikitaswinnen/model-for-predicting-rapid-response-team-events
apache-2.0
For admit_type_cd!='0' & encntr_type_class_cd='391'
query_TotalEncs = """
SELECT count(1)
FROM (
    SELECT DISTINCT encntr_id
    FROM encounter
    WHERE encntr_complete_dt_tm < 4e12
      AND loc_facility_cd = '633867'
      AND admit_type_cd != '0'
      AND encntr_type_class_cd = '391'
) t;
"""
cur.execute(query_TotalEncs)
cur.fetchall()
Examining distribution of encounter durations (with loc_facility_cd) Analyze the durations of the RRT event patients.
query_count = """
SELECT count(*)
FROM (
    SELECT DISTINCT ce.encntr_id
    FROM clinical_event ce
    INNER JOIN encounter enc ON enc.encntr_id = ce.encntr_id
    WHERE ce.event_cd = '54411998'
      AND ce.result_status_cd NOT IN ('31', '36')
      AND ce.valid_until_dt_tm > 4e12
      AND ce.event_class_cd not in ('654...
Let's look at durations for inpatients WITH RRTs from the Main Hospital where encounter_admit_type is not zero
query = """
SELECT DISTINCT ce.encntr_id
    , COALESCE(tci.checkin_dt_tm, enc.arrive_dt_tm) AS checkin_dt_tm
    , enc.depart_dt_tm AS depart_dt_tm
    , (enc.depart_dt_tm - COALESCE(tci.checkin_dt_tm, enc.arrive_dt_tm))/3600000 AS diff_hours
    , enc.reason_for_visit
    , enc.admit_src_cd
    , enc.admit_ty...
Let's look at durations for inpatients WITHOUT RRTs from the Main Hospital where encounter_admit_type is not zero
query = """
SELECT DISTINCT ce.encntr_id
    , COALESCE(tci.checkin_dt_tm, enc.arrive_dt_tm) AS checkin_dt_tm
    , enc.depart_dt_tm AS depart_dt_tm
    , (enc.depart_dt_tm - COALESCE(tci.checkin_dt_tm, enc.arrive_dt_tm))/3600000 AS diff_hours
    , enc.reason_for_visit
    , enc.admit_src_cd
    , enc.admi...
Plot both together to see how encounter duration distributions are different
plt.figure(figsize=(10,8))
df_rrt.diff_hours.plot.hist(alpha=0.4, bins=400, normed=True)
df_nonrrt.diff_hours.plot.hist(alpha=0.4, bins=800, normed=True)
plt.xlabel('Hospital Stay Durations, hours', fontsize=14)
plt.ylabel('Normalized Frequency', fontsize=14)
plt.legend(['RRT', 'Non RRT'])
plt.tick_params(labelsize=14)...
Even accounting for the hospital, inpatient status, and some of the admit_type_cd values, the durations are still quite different between RRT & non-RRT. Trying some subset visualizations -- these show no difference.
print df_nonrrt.admit_type_cd.value_counts()
print
print df_rrt.admit_type_cd.value_counts()

print df_nonrrt.admit_src_cd.value_counts()
print
print df_rrt.admit_src_cd.value_counts()

plt.figure(figsize=(10,8))
df_rrt[df_rrt.admit_type_cd=='309203'].diff_hours.plot.hist(alpha=0.4, bins=300, normed=True)
df_nonrrt[df...
Despite controlling for patient parameters, patients with RRT events stay in the hospital longer than patients without RRT events.
Rerun previous EDA on hospital & patient types
Let's take a step back and look at the encounter table, for all hospitals and patient types [but using the corrected time duration].
# For encounters with RRT events
query = """
SELECT DISTINCT ce.encntr_id
    , COALESCE(tci.checkin_dt_tm, enc.arrive_dt_tm) AS checkin_dt_tm
    , enc.depart_dt_tm AS depart_dt_tm
    , (enc.depart_dt_tm - COALESCE(tci.checkin_dt_tm, enc.arrive_dt_tm))/3600000 AS diff_hours
    , enc.reason_for_visit ...
The notebook Probe_encounter_types_classes explores admit type, class types & counts
plt.figure()
df['diff_hours'].plot.hist(bins=500)
plt.xlabel("Hospital Stay Duration, hours")
plt.title("Range of stays, patients with RRT")
plt.xlim(0, 2000)
Group by facility We want to pull from similar patient populations
df.head()
df.loc_desc.value_counts()
grouped = df.groupby('loc_desc')
grouped.describe()
Most results come from 633867, or The Main Hospital.
df.diff_hours.hist(by=df.loc_desc, bins=300)

# Use locations 4382264, 4382273, 633867
plt.figure(figsize=(12, 6))
df[df['loc_facility_cd']=='633867']['diff_hours'].plot.hist(alpha=0.4, bins=300, normed=True)
df[df['loc_facility_cd']=='4382264']['diff_hours'].plot.hist(alpha=0.4, bins=300, normed=True)
df[df['loc_facili...
Looks like these three locations (633867, 4382264, 4382273) have about the same distribution. Appropriate test to verify this: 2-sample Kolmogorov-Smirnov, if you're willing to compare pairwise...other tests? Wikipedia has a good article with references: https://en.wikipedia.org/wiki/Kolmogorov–Smirnov_test. Null hypot...
from scipy.stats import ks_2samp
ks_2samp(df[df['loc_facility_cd']=='633867']['diff_hours'],
         df[df['loc_facility_cd']=='4382264']['diff_hours'])
# Critical test statistic at alpha = 0.05: 1.36 * sqrt((n1+n2)/(n1*n2)) = 1.36*sqrt((1775+582)/(1775*582)) = 0.065
# 0.074 > 0.065 -> null hypothesis rejected at level 0.05....
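The KS statistic itself is just the largest gap between the two empirical CDFs. A small NumPy sketch of that definition (an illustration, not scipy's implementation):

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample KS statistic: max gap between the empirical CDFs."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    # Empirical CDF of each sample evaluated on the pooled points.
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return np.abs(cdf_a - cdf_b).max()

# Identical samples give 0.0; completely disjoint samples give 1.0.
print(ks_statistic(np.array([0.0, 1.0, 2.0]), np.array([0.0, 1.0, 2.0])))
print(ks_statistic(np.array([0.0, 1.0, 2.0]), np.array([10.0, 11.0, 12.0])))
```

scipy's ks_2samp additionally returns the p-value used in the comparison above.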
From scipy documentation: "If the KS statistic is small or the p-value is high, then we cannot reject the hypothesis that the distributions of the two samples are the same" Null hypothesis: the distributions are the same. Looks like samples from 4382273 are different... plot that & 633867
plt.figure(figsize=(10,8))
df[df['loc_facility_cd']=='633867']['diff_hours'].plot.hist(alpha=0.4, bins=500, normed=True)
df[df['loc_facility_cd']=='4382273']['diff_hours'].plot.hist(alpha=0.4, bins=700, normed=True)
plt.xlabel('Hospital Stay Durations, hours')
plt.legend(['633867', '4382273'])
plt.xlim(0, 1000)
Let's compare encounter duration histograms for patients with RRT & without RRT events, and see if there is a right subset of data to be selected for modeling (There is)
df.columns
df.admit_src_desc.value_counts()
df.enc_type_class_desc.value_counts()  # vast majority are inpatient
df.enc_type_desc.value_counts()
df.admit_type_desc.value_counts()
Plot RRT & non-RRT with different codes
# For encounters without RRT events, from Main Hospital.
# takes a while to run -- several minutes
query = """
SELECT DISTINCT ce.encntr_id
    , COALESCE(tci.checkin_dt_tm, enc.arrive_dt_tm) AS checkin_dt_tm
    , enc.depart_dt_tm AS depart_dt_tm
    , (enc.depart_dt_tm - COALESCE(tci.checkin_dt_tm, enc.ar...
Softmax Classifier Sanity Check: Overfit Small Portion
script = """
source("breastcancer/softmax_clf.dml") as clf

# Hyperparameters & Settings
lr = 1e-2  # learning rate
mu = 0.9  # momentum
decay = 0.999  # learning rate decay constant
batch_size = 32
epochs = 500
log_interval = 1
n = 200  # sample size for overfitting sanity check

# Train
[W, b] = clf::train(X[1:n,], Y...
projects/breast_cancer/MachineLearning.ipynb
dusenberrymw/incubator-systemml
apache-2.0
Train
script = """
source("breastcancer/softmax_clf.dml") as clf

# Hyperparameters & Settings
lr = 5e-7  # learning rate
mu = 0.5  # momentum
decay = 0.999  # learning rate decay constant
batch_size = 32
epochs = 1
log_interval = 10

# Train
[W, b] = clf::train(X, Y, X_val, Y_val, lr, mu, decay, batch_size, epochs, log_inte...
Eval
script = """
source("breastcancer/softmax_clf.dml") as clf

# Eval
probs = clf::predict(X, W, b)
[loss, accuracy] = clf::eval(probs, Y)
probs_val = clf::predict(X_val, W, b)
[loss_val, accuracy_val] = clf::eval(probs_val, Y_val)
"""
outputs = ("loss", "accuracy", "loss_val", "accuracy_val")
script = dml(script).input(X...
LeNet-like ConvNet Sanity Check: Overfit Small Portion
script = """
source("breastcancer/convnet.dml") as clf

# Hyperparameters & Settings
lr = 1e-2  # learning rate
mu = 0.9  # momentum
decay = 0.999  # learning rate decay constant
lambda = 0  # 5e-04
batch_size = 32
epochs = 300
log_interval = 1
dir = "models/lenet-cnn/sanity/"
n = 200  # sample size for overfitting sani...
Hyperparameter Search
script = """
source("breastcancer/convnet.dml") as clf

dir = "models/lenet-cnn/hyperparam-search/"

# TODO: Fix `parfor` so that it can be efficiently used for hyperparameter tuning
j = 1
while(j < 2) {  #parfor(j in 1:10000, par=6) {
  # Hyperparameter Sampling & Settings
  lr = 10 ^ as.scalar(rand(rows=1, cols=1, min...
Train
ml.setStatistics(True)
ml.setExplain(True)
# sc.setLogLevel("OFF")

script = """
source("breastcancer/convnet_distrib_sgd.dml") as clf

# Hyperparameters & Settings
lr = 0.00205  # learning rate
mu = 0.632  # momentum
decay = 0.99  # learning rate decay constant
lambda = 0.00385
batch_size = 1
parallel_batches = 19
ep...
Eval
script = """
source("breastcancer/convnet_distrib_sgd.dml") as clf

# Eval
probs = clf::predict(X, C, Hin, Win, Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2)
[loss, accuracy] = clf::eval(probs, Y)
probs_val = clf::predict(X_val, C, Hin, Win, Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2)
[loss_val, accuracy_val]...
projects/breast_cancer/MachineLearning.ipynb
dusenberrymw/incubator-systemml
apache-2.0
# script = """ # N = 102400 # num examples # C = 3 # num input channels # Hin = 256 # input height # Win = 256 # input width # X = rand(rows=N, cols=C*Hin*Win, pdf="normal") # """ # outputs = "X" # script = dml(script).output(*outputs) # thisX = ml.execute(script).get(*outputs) # thisX # script = """ # f = functio...
projects/breast_cancer/MachineLearning.ipynb
dusenberrymw/incubator-systemml
apache-2.0
Create and fit Spark ML model
from pyspark.ml.classification import LogisticRegression from pyspark.ml.feature import VectorAssembler from pyspark.ml import Pipeline # Create feature vectors. Ignore arr_delay and its derivative, is_late feature_assembler = VectorAssembler( inputCols=[x for x in training.columns if x not in ["is_late","arrdelay"...
spark/Logistic Regression Example.ipynb
zoltanctoth/bigdata-training
gpl-2.0
Predict whether the aircraft will be late
predicted = model.transform(test) predicted.take(1)
spark/Logistic Regression Example.ipynb
zoltanctoth/bigdata-training
gpl-2.0
Check model performance
predicted = predicted.withColumn("is_late",is_late(predicted.arrdelay)) predicted.crosstab("is_late","prediction").show()
spark/Logistic Regression Example.ipynb
zoltanctoth/bigdata-training
gpl-2.0
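The crosstab above is a confusion matrix of true labels versus predictions. As a sketch of how to turn such counts into summary metrics, here is a plain-Python computation over hypothetical counts (the numbers below are illustrative, not taken from the notebook's actual output):

```python
# Hypothetical (true_label, prediction) counts from a crosstab like the one above.
matrix = {
    (0, 0): 800,  # on time, predicted on time (true negatives)
    (0, 1): 50,   # on time, predicted late  (false positives)
    (1, 0): 120,  # late,    predicted on time (false negatives)
    (1, 1): 230,  # late,    predicted late  (true positives)
}

total = sum(matrix.values())
accuracy = (matrix[(0, 0)] + matrix[(1, 1)]) / total
precision = matrix[(1, 1)] / (matrix[(1, 1)] + matrix[(0, 1)])
print(f"accuracy={accuracy:.3f} precision={precision:.3f}")
```

The same arithmetic applies to the real crosstab once its four counts are read off.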
The data goes all the way back to 1967 and is updated weekly. Blaze provides us with the first 10 rows of the data for display. Just to confirm, let's count the number of rows in the Blaze expression:
fred_ccsa.count()
notebooks/data/quandl.fred_ccsa/notebook.ipynb
quantopian/research_public
apache-2.0
Let's plot it for fun. This data set is small enough to put right into a Pandas DataFrame.
unrate_df = odo(fred_ccsa, pd.DataFrame) unrate_df.plot(x='asof_date', y='value') plt.xlabel("As Of Date (asof_date)") plt.ylabel("Unemployment Claims") plt.title("United States Unemployment Claims") plt.legend().set_visible(False) unrate_recent = odo(fred_ccsa[fred_ccsa.asof_date >= '2002-01-01'], pd.DataFrame) unr...
notebooks/data/quandl.fred_ccsa/notebook.ipynb
quantopian/research_public
apache-2.0
Table of Contents Outer Join Operator CHAR datatype size increase Binary Data Type Boolean Data Type Synonyms for Data Types Function Synonyms Netezza Compatibility Select Enhancements Hexadecimal Functions Table Creation with Data <a id='outer'></a> Outer Join Operator Db2 allows the use of the ...
%%sql SELECT DEPTNAME, LASTNAME FROM DEPARTMENT D LEFT OUTER JOIN EMPLOYEE E ON D.DEPTNO = E.WORKDEPT
v1/Db2 11 Compatibility Features.ipynb
DB2-Samples/db2jupyter
apache-2.0
TRANSLATE Function The translate function syntax in Db2 is: <pre> TRANSLATE(expression, to_string, from_string, padding) </pre> The TRANSLATE function returns a value in which one or more characters in a string expression might have been converted to other characters. The function converts all the characters in char-...
%%sql SET SQL_COMPAT = 'NPS'; VALUES TRANSLATE('Hello');
v1/Db2 11 Compatibility Features.ipynb
DB2-Samples/db2jupyter
apache-2.0
OFFSET Extension The FETCH FIRST n ROWS ONLY clause can also include an OFFSET keyword. The OFFSET keyword allows you to retrieve the answer set after skipping "n" number of rows. The syntax of the OFFSET keyword is: <pre> OFFSET n ROWS FETCH FIRST x ROWS ONLY </pre> The OFFSET n ROWS must precede the FETCH FIRST x R...
%%sql SELECT LASTNAME FROM EMPLOYEE FETCH FIRST 10 ROWS ONLY
v1/Db2 11 Compatibility Features.ipynb
DB2-Samples/db2jupyter
apache-2.0
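The skip-then-take semantics of `OFFSET n ROWS FETCH FIRST x ROWS ONLY` can be sketched outside Db2 as well. SQLite's `LIMIT ... OFFSET` behaves the same way, so a minimal in-memory example (with a hypothetical `employee` table, not the notebook's Db2 sample data) looks like this:

```python
import sqlite3

# Build a small throwaway table with 25 zero-padded names.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (lastname TEXT)")
conn.executemany("INSERT INTO employee VALUES (?)",
                 [(f"NAME{i:02d}",) for i in range(25)])

# Skip the first 10 rows, then take the next 5 -- the same effect as
# "OFFSET 10 ROWS FETCH FIRST 5 ROWS ONLY" in Db2.
page = conn.execute(
    "SELECT lastname FROM employee ORDER BY lastname LIMIT 5 OFFSET 10"
).fetchall()
print(page)  # rows 10..14 in sorted order
```

Note that pagination only gives deterministic results when the query has an `ORDER BY`, in SQLite and Db2 alike.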
Back to Top <a id="create"></a> Table Creation Extensions The CREATE TABLE statement can now use a SELECT clause to generate the definition and LOAD the data at the same time. Create Table Syntax The syntax of the CREATE table statement has been extended with the AS (SELECT ...) WITH DATA clause: <pre> CREATE TABLE <n...
%sql -q DROP TABLE AS_EMP %sql CREATE TABLE AS_EMP AS (SELECT EMPNO, SALARY+BONUS FROM EMPLOYEE) DEFINITION ONLY;
v1/Db2 11 Compatibility Features.ipynb
DB2-Samples/db2jupyter
apache-2.0
A growing collection of tasks is readily available in pyannote.audio.tasks...
from pyannote.audio.tasks import __all__ as TASKS; print('\n'.join(TASKS))
tutorials/add_your_own_task.ipynb
pyannote/pyannote-audio
mit
... but you will eventually want to use pyannote.audio to address a different task. In this example, we will add a new task addressing the sound event detection problem. Problem specification A problem is expected to be solved by a model $f$ that takes an audio chunk $X$ as input and returns its predicted solution $\h...
from pyannote.audio.core.task import Resolution resolution = Resolution.CHUNK
tutorials/add_your_own_task.ipynb
pyannote/pyannote-audio
mit
Type of problem Similarly, the type of your problem may fall into one of these generic machine learning categories: * Problem.BINARY_CLASSIFICATION for binary classification * Problem.MONO_LABEL_CLASSIFICATION for multi-class classification * Problem.MULTI_LABEL_CLASSIFICATION for multi-label classification * Problem....
from pyannote.audio.core.task import Problem problem = Problem.MULTI_LABEL_CLASSIFICATION from pyannote.audio.core.task import Specifications specifications = Specifications( problem=problem, resolution=resolution, duration=5.0, classes=["Speech", "Dog", "Cat", "Alarm_bell_ringing", "Dishes", ...
tutorials/add_your_own_task.ipynb
pyannote/pyannote-audio
mit
A task is expected to be solved by a model $f$ that (usually) takes an audio chunk $X$ as input and returns its predicted solution $\hat{y} = f(X)$. To help training the model $f$, the task $\mathcal{T}$ is in charge of - generating $(X, y)$ training samples using the dataset - defining the loss function $\mathcal{L...
from typing import Optional import torch import torch.nn as nn import numpy as np from pyannote.core import Annotation from pyannote.audio import Model from pyannote.audio.core.task import Task, Resolution # Your custom task must be a subclass of `pyannote.audio.core.task.Task` class SoundEventDetection(Task): """...
tutorials/add_your_own_task.ipynb
pyannote/pyannote-audio
mit
You may have noticed that we never declared the types of the variables a, b, and c. In Python you don't have to do that. The language picks the type itself from the value you put into the variable. For the variable a that type is int (an integer). For b it is str (a string). For c it is float (a floating-point number). In the near future you will most likely ...
a = 5.0 b = "LKSH students are awesome =^_^=" print(type(a)) print(type(b))
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
Parallel assignment In Python you can assign values to several variables at once:
a, b = 3, 5 print(a) print(b)
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
Here Python first evaluates all the values on the right-hand side, and only then assigns the computed values to the variables on the left:
a = 3 b = 5 a, b = b, a + b print(a) print(b)
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
This lets you, for example, swap the values of two variables in a single line:
a = "apple" b = "banana" a, b = b, a print(a) print(b)
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
Input and output As you have already seen, Python has the print function for writing to the screen. You can pass it several values separated by commas, and they will be printed on one line separated by spaces:
a = 2 b = 3 print(a, "+", b, "=", a + b)
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
For keyboard input there is the input function. It reads one whole line:
a = input() b = input() print(a + b)
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
Aha, something went wrong! We got 23 instead of 5. That happened because input() returns a string (str), not a number (int). To fix this, we need to explicitly convert the result of input() to the int type.
a = int(input()) b = int(input()) print(a + b)
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
That's better :) A common mistake is to forget the parentheses after the input function. Let's see what happens in that case:
a = int(input) b = int(input) print(a + b)
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
This error reads: TypeError: int() argument must be a string, a bytes-like object or a number, not a function. Now you know what to do if you ever get this error ;) Arithmetic operations Let's learn how to add, multiply, subtract, and perform other operations on inte...
print(11 + 7, 11 - 7, 11 * 7, (2 + 9) * (12 - 5))
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
Real division always produces a real number (float) as its result, regardless of the arguments (as long as the divisor is not 0):
print(12 / 8, 12 / 4, 12 / -7)
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
The result of integer division is the result of real division rounded down to the nearest smaller integer:
print(12 // 8, 12 // 4, 12 // -7)
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
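The rounding-down rule above is worth checking for a negative divisor, where it differs from "rounding toward zero". A minimal sketch comparing // with math.floor of the real-division result:

```python
import math

# // rounds toward negative infinity, matching math.floor of the
# real-division result -- so 12 // -7 is -2, not -1.
print(12 // -7)             # -2
print(math.floor(12 / -7))  # -2, same result
```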
The remainder is what is left of a number after integer division. If c = a // b, then a can be written as a = c * b + r. Here r is the remainder. Example: a = 20, b = 8, c = a // b = 2. Then a = c * b + r becomes 20 = 2 * 8 + 4. The remainder is 4.
print(12 % 8, 12 % 4, 12 % -7)
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
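The identity a = c * b + r from the text can be verified directly in Python, including for a negative divisor:

```python
# For c = a // b and r = a % b, the identity a == c * b + r always
# holds (for nonzero b), including the worked example (20, 8) above.
for a, b in [(20, 8), (12, 8), (12, -7)]:
    c, r = a // b, a % b
    assert a == c * b + r
    print(a, b, c, r)
```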
Raising a to the power b means multiplying a by itself b times. In mathematics this is written as $a^b$.
print(5 ** 2, 2 ** 4, 13 ** 0)
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
Exponentiation also works for real a and negative b. A number raised to a negative power is one divided by the same number raised to the positive power: $a^{-b} = \frac{1}{a^b}$
print(2.5 ** 2, 2 ** -3)
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
Let's see what happens if we raise an integer to a large power:
print(5 ** 100)
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
Unlike C++ or Pascal, Python computes the result correctly even when the result is a very large number. And what if we raise a floating-point number to a large power?
print(5.0 ** 100)
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
Notation of the form &lt;number&gt;e&lt;exponent&gt; is another way to write $\text{<number>} \cdot 10^\text{<exponent>}$. That is: $$\text{7.888609052210118e+69} = 7.888609052210118 \cdot 10^{69}$$ which is the same as 7888609052210118000000000000000000000000000000000000000000000000000000. This result is not as precise...
print(2 ** 0.5, 9 ** 0.5)
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
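Going back to the e-notation described above, a quick sketch showing that such literals are just ordinary floats, and that strings in this form can also be parsed:

```python
# <number>e<exponent> is shorthand for <number> * 10 ** <exponent>.
print(1.5e3)          # 1500.0
print(float("2e-3"))  # 0.002 -- the same notation works in strings
```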
At school you were probably taught that you cannot take the square root of a negative number. C++ and Pascal will raise an error if you try. Let's see what Python does:
print((-4) ** 0.5)
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
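A minimal sketch inspecting the result of the cell above; the standard cmath module computes the same root without the tiny floating-point noise in the real part:

```python
import cmath

# (-4) ** 0.5 yields a complex number in Python rather than an error.
z = (-4) ** 0.5
print(type(z))         # <class 'complex'>
print(cmath.sqrt(-4))  # 2j
```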
Actually, that is not entirely true. It is possible to take the square root of a negative number after all, but the result is not a real number; it is a so-called complex number. If you see such a scary-looking thing in your program, your code most likely took the root of a negative number, which means you should look...
a = 4 b = 11 c = (a ** 2 + b * 3) / (9 - b % (a + 1)) print(c)
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
In the example above, the variable c is assigned the value of the expression $$\frac{a^2 + b \cdot 3}{9 - b \text{ mod } (a + 1)}$$ When there are no parentheses, the arithmetic operations in an expression are evaluated in order of precedence (see the table above). Operations with precedence 1 are performed first, then those with precedence 2, and so on. With equal preced...
print(2 * 2 + 2) print(2 * (2 + 2))
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
Type conversion If you have a value of one type, you can convert it to another type by calling the function with the same name:
a = "-15" print(a, int(a), float(a))
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
More examples:
# a_int, b_float, c_str are just variable names. # They are named this way to make it easier to see which value is stored where. a_int = 3 b_float = 5.0 c_str = "10" print(a_int, b_float, c_str) # Trying to add them without conversion would raise an error, because Python # does not know how to add numbers and strings. ...
crash-course/variables-and-expressions.ipynb
citxx/sis-python
mit
Gradient Boosted Trees: Model Understanding <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://tensorflow.google.cn/tutorials/estimator/boosted_trees_model_understanding"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png"> View on TensorFlow.org</a> </td> <td> <a tar...
!pip install statsmodels import numpy as np import pandas as pd from IPython.display import clear_output # Load dataset. dftrain = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/train.csv') dfeval = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/eval.csv') y_train = dftrain.pop('surv...
site/zh-cn/tutorials/estimator/boosted_trees_model_understanding.ipynb
tensorflow/docs-l10n
apache-2.0
For a description of the features, see the previous tutorial. Create feature columns and input functions, and train the estimator Data preprocessing Build the dataset from the raw numeric features and the one-hot-encoded non-numeric features (such as sex and class).
fc = tf.feature_column CATEGORICAL_COLUMNS = ['sex', 'n_siblings_spouses', 'parch', 'class', 'deck', 'embark_town', 'alone'] NUMERIC_COLUMNS = ['age', 'fare'] def one_hot_cat_column(feature_name, vocab): return fc.indicator_column( fc.categorical_column_with_vocabulary_list(feature_name,...
site/zh-cn/tutorials/estimator/boosted_trees_model_understanding.ipynb
tensorflow/docs-l10n
apache-2.0