Build the input pipeline

Use the from_tensor_slices method in the tf.data API to create the input function, reading data directly from Pandas.
# Use the entire dataset as a single batch when it is small.
NUM_EXAMPLES = len(y_train)

def make_input_fn(X, y, n_epochs=None, shuffle=True):
  def input_fn():
    dataset = tf.data.Dataset.from_tensor_slices((X.to_dict(orient='list'), y))
    if shuffle:
      dataset = dataset.shuffle(NUM_EXAMPLES)
    # For training, cycle through the data as many times as needed (n_epochs=None).
    dataset = (dataset
               .repeat(n_epochs)
               .batch(NUM_EXAMPLES))
    return dataset
  return input_fn

# Training and evaluation input functions.
train_input_fn = make_input_fn(dftrain, y_train)
eval_input_fn = make_input_fn(dfeval, y_eval, shuffle=False, n_epochs=1)
Source: site/zh-cn/tutorials/estimator/boosted_trees_model_understanding.ipynb (tensorflow/docs-l10n, apache-2.0)
Train the model
params = {
  'n_trees': 50,
  'max_depth': 3,
  'n_batches_per_layer': 1,
  # You must enable center_bias = True to get DFCs. This will force the model to
  # make an initial prediction before using any features (e.g. use the mean of
  # the training labels for regression or log odds for classification when
  # using cross entropy loss).
  'center_bias': True
}

est = tf.estimator.BoostedTreesClassifier(feature_columns, **params)

# Train model.
est.train(train_input_fn, max_steps=100)

# Evaluation.
results = est.evaluate(eval_input_fn)
clear_output()
pd.Series(results).to_frame()
For performance reasons, when your data fits in memory, we recommend passing train_in_memory=True to the tf.estimator.BoostedTreesClassifier function. However, if training time is not a concern, or if you have a very large dataset and want to do distributed training, use the tf.estimator.BoostedTrees API shown above. When using this method, do not batch your data; instead, operate on the entire dataset.
in_memory_params = dict(params)
in_memory_params['n_batches_per_layer'] = 1

# In-memory input_fn does not use batching.
def make_inmemory_train_input_fn(X, y):
  y = np.expand_dims(y, axis=1)
  def input_fn():
    return dict(X), y
  return input_fn

train_input_fn = make_inmemory_train_input_fn(dftrain, y_train)

# Train the model.
est = tf.estimator.BoostedTreesClassifier(
    feature_columns,
    train_in_memory=True,
    **in_memory_params)

est.train(train_input_fn)
print(est.evaluate(eval_input_fn))
Model interpretation and plotting
import matplotlib.pyplot as plt
import seaborn as sns
sns_colors = sns.color_palette('colorblind')
Local interpretability

Next, you will output directional feature contributions (DFCs) to explain individual predictions, following the approach outlined by Palczewska et al. and by Saabas in Interpreting Random Forests (the treeinterpreter package for random forests in scikit-learn works on the same principle). Output the DFCs with:

pred_dicts = list(est.experimental_predict_with_explanations(eval_input_fn))

(Note: the method is named experimental because we may modify the API before dropping the experimental prefix.)
pred_dicts = list(est.experimental_predict_with_explanations(eval_input_fn))

# Create DFC Pandas dataframe.
labels = y_eval.values
probs = pd.Series([pred['probabilities'][1] for pred in pred_dicts])
df_dfc = pd.DataFrame([pred['dfc'] for pred in pred_dicts])
df_dfc.describe().T
DFCs have a nice property: the sum of the contributions plus the bias equals the prediction for a given example.
# Sum of DFCs + bias == probability.
bias = pred_dicts[0]['bias']
dfc_prob = df_dfc.sum(axis=1) + bias
np.testing.assert_almost_equal(dfc_prob.values, probs.values)
Plot the DFCs for an individual passenger, coloring the bars by the direction of the contribution and adding the feature values to the figure.
# Boilerplate code for plotting :)
def _get_color(value):
    """To make positive DFCs plot green, negative DFCs plot red."""
    green, red = sns.color_palette()[2:4]
    if value >= 0:
        return green
    return red

def _add_feature_values(feature_values, ax):
    """Display feature's values on left of plot."""
    x_coord = ax.get_xlim()[0]
    OFFSET = 0.15
    for y_coord, (feat_name, feat_val) in enumerate(feature_values.items()):
        t = plt.text(x_coord, y_coord - OFFSET, '{}'.format(feat_val), size=12)
        t.set_bbox(dict(facecolor='white', alpha=0.5))
    from matplotlib.font_manager import FontProperties
    font = FontProperties()
    font.set_weight('bold')
    t = plt.text(x_coord, y_coord + 1 - OFFSET, 'feature\nvalue',
                 fontproperties=font, size=12)

def plot_example(example):
    TOP_N = 8  # View top 8 features.
    sorted_ix = example.abs().sort_values()[-TOP_N:].index  # Sort by magnitude.
    example = example[sorted_ix]
    colors = example.map(_get_color).tolist()
    ax = example.to_frame().plot(kind='barh',
                                 color=colors,
                                 legend=None,
                                 alpha=0.75,
                                 figsize=(10, 6))
    ax.grid(False, axis='y')
    ax.set_yticklabels(ax.get_yticklabels(), size=14)

    # Add feature values.
    _add_feature_values(dfeval.iloc[ID][sorted_ix], ax)
    return ax

# Plot results.
ID = 182
example = df_dfc.iloc[ID]  # Choose ith example from evaluation set.
TOP_N = 8  # View top 8 features.
sorted_ix = example.abs().sort_values()[-TOP_N:].index
ax = plot_example(example)
ax.set_title('Feature contributions for example {}\n pred: {:1.2f}; label: {}'.format(ID, probs[ID], labels[ID]))
ax.set_xlabel('Contribution to predicted probability', size=14)
plt.show()
A larger contribution magnitude means a larger influence on the model's prediction. A negative contribution indicates that the feature's value for this example reduced the model's prediction, while a positive contribution increased it. You can also use a violin plot to compare this example's DFCs against the overall distribution.
# Boilerplate plotting code.
def dist_violin_plot(df_dfc, ID):
    # Initialize plot.
    fig, ax = plt.subplots(1, 1, figsize=(10, 6))

    # Create example dataframe.
    TOP_N = 8  # View top 8 features.
    example = df_dfc.iloc[ID]
    ix = example.abs().sort_values()[-TOP_N:].index
    example = example[ix]
    example_df = example.to_frame(name='dfc')

    # Add contributions of entire distribution.
    parts = ax.violinplot([df_dfc[w] for w in ix],
                          vert=False,
                          showextrema=False,
                          widths=0.7,
                          positions=np.arange(len(ix)))
    face_color = sns_colors[0]
    alpha = 0.15
    for pc in parts['bodies']:
        pc.set_facecolor(face_color)
        pc.set_alpha(alpha)

    # Add feature values (use the local index, not the global sorted_ix).
    _add_feature_values(dfeval.iloc[ID][ix], ax)

    # Add local contributions.
    ax.scatter(example,
               np.arange(example.shape[0]),
               color=sns.color_palette()[2],
               s=100,
               marker="s",
               label='contributions for example')

    # Legend.
    # Proxy plot, to show violinplot dist on legend.
    ax.plot([0, 0], [1, 1],
            label='eval set contributions\ndistributions',
            color=face_color, alpha=alpha, linewidth=10)
    legend = ax.legend(loc='lower right', shadow=True, fontsize='x-large',
                       frameon=True)
    legend.get_frame().set_facecolor('white')

    # Format plot.
    ax.set_yticks(np.arange(example.shape[0]))
    ax.set_yticklabels(example.index)
    ax.grid(False, axis='y')
    ax.set_xlabel('Contribution to predicted probability', size=14)
Plot this example.
dist_violin_plot(df_dfc, ID)
plt.title('Feature contributions for example {}\n pred: {:1.2f}; label: {}'.format(ID, probs[ID], labels[ID]))
plt.show()
Finally, third-party tools such as LIME and shap can also help you understand a model's individual predictions.

Global feature importances

Additionally, you may want to understand the model as a whole rather than individual predictions. Next, you will compute and use: gain-based feature importances, via est.experimental_feature_importances; permutation feature importances; and aggregated DFCs, via est.experimental_predict_with_explanations.

Gain-based feature importances measure the change in loss when splitting on a particular feature, whereas permutation feature importances are computed by shuffling one feature at a time on the evaluation set and observing the change in model performance. In general, permutation feature importance is preferred over gain-based feature importance, although both methods can be unreliable when potential predictor variables differ in their scale of measurement or number of categories, or when features are correlated (source). For a more thorough overview and in-depth discussion of the different kinds of feature importances, see this article.

Gain-based feature importances

Gain-based feature importances are built into the TensorFlow boosted trees estimators via est.experimental_feature_importances.
importances = est.experimental_feature_importances(normalize=True)
df_imp = pd.Series(importances)

# Visualize importances.
N = 8
ax = (df_imp.iloc[0:N][::-1]
      .plot(kind='barh',
            color=sns_colors[0],
            title='Gain feature importances',
            figsize=(10, 6)))
ax.grid(False, axis='y')
Mean absolute DFCs

You can also average the absolute values of the DFCs to understand their impact at a global level.
# Plot.
dfc_mean = df_dfc.abs().mean()
N = 8
sorted_ix = dfc_mean.abs().sort_values()[-N:].index  # Average and sort by absolute.
ax = dfc_mean[sorted_ix].plot(kind='barh',
                              color=sns_colors[1],
                              title='Mean |directional feature contributions|',
                              figsize=(10, 6))
ax.grid(False, axis='y')
You can also see how the DFCs vary as a feature's value changes.
FEATURE = 'fare'
feature = pd.Series(df_dfc[FEATURE].values, index=dfeval[FEATURE].values).sort_index()
ax = sns.regplot(x=feature.index.values, y=feature.values, lowess=True)
ax.set_ylabel('contribution')
ax.set_xlabel(FEATURE)
ax.set_xlim(0, 100)
plt.show()
Permutation feature importances
def permutation_importances(est, X_eval, y_eval, metric, features):
    """Column by column, shuffle values and observe effect on eval set.

    source: http://explained.ai/rf-importance/index.html
    A similar approach can be done during training. See "Drop-column importance"
    in the above article."""
    baseline = metric(est, X_eval, y_eval)
    imp = []
    for col in features:
        save = X_eval[col].copy()
        X_eval[col] = np.random.permutation(X_eval[col])
        m = metric(est, X_eval, y_eval)
        X_eval[col] = save
        imp.append(baseline - m)
    return np.array(imp)

def accuracy_metric(est, X, y):
    """TensorFlow estimator accuracy."""
    eval_input_fn = make_input_fn(X, y=y, shuffle=False, n_epochs=1)
    return est.evaluate(input_fn=eval_input_fn)['accuracy']

features = CATEGORICAL_COLUMNS + NUMERIC_COLUMNS
importances = permutation_importances(est, dfeval, y_eval, accuracy_metric, features)
df_imp = pd.Series(importances, index=features)

sorted_ix = df_imp.abs().sort_values().index
ax = df_imp[sorted_ix][-5:].plot(kind='barh', color=sns_colors[2], figsize=(10, 6))
ax.grid(False, axis='y')
ax.set_title('Permutation feature importance')
plt.show()
Visualizing the model's fitting process

First, construct training data using the following formula:

$$z = x \cdot e^{-x^2 - y^2}$$

where $z$ is the value you are trying to predict (the dependent variable), and $x$ and $y$ are the features.
from numpy.random import uniform, seed
from scipy.interpolate import griddata

# Create fake data.
seed(0)
npts = 5000
x = uniform(-2, 2, npts)
y = uniform(-2, 2, npts)
z = x*np.exp(-x**2 - y**2)
xy = np.zeros((2, np.size(x)))
xy[0] = x
xy[1] = y
xy = xy.T

# Prep data for training.
df = pd.DataFrame({'x': x, 'y': y, 'z': z})

# Note: no trailing commas here -- they would turn the arrays into tuples.
xi = np.linspace(-2.0, 2.0, 200)
yi = np.linspace(-2.1, 2.1, 210)
xi, yi = np.meshgrid(xi, yi)

df_predict = pd.DataFrame({
    'x': xi.flatten(),
    'y': yi.flatten(),
})
predict_shape = xi.shape

def plot_contour(x, y, z, **kwargs):
    # Grid the data.
    plt.figure(figsize=(10, 8))
    # Contour the gridded data, plotting dots at the nonuniform data points.
    CS = plt.contour(x, y, z, 15, linewidths=0.5, colors='k')
    # zi is the gridded training data, defined in the next cell.
    CS = plt.contourf(x, y, z, 15,
                      vmax=abs(zi).max(), vmin=-abs(zi).max(), cmap='RdBu_r')
    plt.colorbar()  # Draw colorbar.
    # Plot data points.
    plt.xlim(-2, 2)
    plt.ylim(-2, 2)
You can visualize the function; red indicates larger values.
zi = griddata(xy, z, (xi, yi), method='linear', fill_value=0)
plot_contour(xi, yi, zi)
plt.scatter(df.x, df.y, marker='.')
plt.title('Contour on training data')
plt.show()

fc = [tf.feature_column.numeric_column('x'),
      tf.feature_column.numeric_column('y')]

def predict(est):
    """Predictions from a given estimator."""
    predict_input_fn = lambda: tf.data.Dataset.from_tensors(dict(df_predict))
    preds = np.array([p['predictions'][0] for p in est.predict(predict_input_fn)])
    return preds.reshape(predict_shape)
First, let's try to fit the data with a linear model.
train_input_fn = make_input_fn(df, df.z)
est = tf.estimator.LinearRegressor(fc)
est.train(train_input_fn, max_steps=500)
plot_contour(xi, yi, predict(est))
As you can see, the fit is poor. Next, let's fit a GBDT model and observe how the model fits the function.
n_trees = 37  #@param {type: "slider", min: 1, max: 80, step: 1}

est = tf.estimator.BoostedTreesRegressor(fc, n_batches_per_layer=1, n_trees=n_trees)
est.train(train_input_fn, max_steps=500)
clear_output()
plot_contour(xi, yi, predict(est))
plt.text(-1.8, 2.1, '# trees: {}'.format(n_trees),
         color='w', backgroundcolor='black', size=20)
plt.show()
Introduction

In the last few years, speed dating has quickly grown in popularity. Despite its popularity, many people don't seem as satisfied as they'd like to be: most users don't end up finding what they were looking for. That's why a crew of data scientists is going to study data on previous speed dating events, in order to make it easier for our users to find their other halves.

Disclaimer: most of the data has been recorded from heterosexual encounters, which makes it difficult to generalize to our whole system. (Newer speed dating events are more plural, taking all sexes and genders into account.)

What are we looking for? Finding questions

The first thing we have to do is ask ourselves what conclusions we hope this study leads to; in other words, find the questions this project is going to answer. First of all, we want to maximize the likelihood that two people fall in love: "Are these two people going to match?" (after selecting two people from a new wave). Secondly, we want to be able to group people in order to choose them for special waves: "Which group does someone correspond to?" (after selecting someone from a new wave).

"Speed Dating" data tidying

The first thing to do is fix possible errors so that it is easier to approach the solution.
speedDatingDF = pd.read_csv("Speed Dating Data.csv", encoding="ISO-8859-1")
# speedDatingDF.dtypes  # We can see which type each attribute has.
Source: Speed Dating - "Are these two people gonna match?".ipynb (DiegoAsterio/speedy-Gonzales, gpl-3.0)
Every variable is described in the document "Speed Dating Data.doc", which is in the same directory as this notebook. In cases such as gender or race, we can see that using numbers is not the best choice; that's why we are going to modify those variables which are not countable. All these changes were made after reading "The Elements of Data Analytic Style", as recommended on the course webpage.

Tidying choice

As most of the data-tidying steps do not involve any programming skill, I have preferred to put them in a black box where they don't disturb the eye of a real programmer. The processing is inside the function tidy_this_data in the Python file pretty_notebook.py, which is in the same repository as this project.
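The body of tidy_this_data is kept out of the notebook, but as an illustration, a minimal sketch of the kind of recoding it might perform is shown below. The gender coding (0 = female, 1 = male) is an assumption for illustration; the real mapping lives in pretty_notebook.py and "Speed Dating Data.doc".

```python
import pandas as pd

# Hypothetical coding, assumed for illustration only.
GENDER_CODES = {0: 'female', 1: 'male'}

def recode_gender(df):
    """Replace numeric gender codes with readable categories."""
    df = df.copy()
    df['gender'] = df['gender'].map(GENDER_CODES).astype('category')
    return df

demo = pd.DataFrame({'gender': [0, 1, 1, 0]})
print(recode_gender(demo)['gender'].tolist())  # ['female', 'male', 'male', 'female']
```

The same pattern (a dict of codes plus `Series.map`) extends naturally to race and the other categorical columns.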
speedDatingDF = pretty_notebook.tidy_this_data(speedDatingDF)
Evaluating our changes

With regard to the tidying process, a guarantee of correct processing is needed. Looking into the issue, two big problems appear. The first problem: people didn't finish their evaluation, and a lot of NaN values can be found in the last variables of the data frame.
# This same behaviour can be seen in most of the last feedback attributes.
print(pretty_notebook.values_in_a_column(speedDatingDF.met))
As the author thinks the problem here is that people could not be followed up, we will simply ignore most of that data. The second problem: people entered wrong values, and these were transcribed into the dataframe.
# Taking the same column and looking at the values different from NaN, an error appears.
values = pretty_notebook.values_in_a_column(speedDatingDF.met)
values = [v for v in values if not np.isnan(v)]
print(values)  # True and False are not the unique values.
For uncountable variables this last error appears only in the met variable, so it is not a big issue; but it must be taken into account later, when the variables representing percentages come into play. The correction is to change these values to True, as the author of the study supposes people meant that they had met their pair more than once.
for v in values[2:]:  # Correction done HERE !!!
    speedDatingDF.loc[speedDatingDF['met'] == v, 'met'] = True

# We evaluate whether the changes are right.
values = pretty_notebook.values_in_a_column(speedDatingDF.met)
values = [v for v in values if not np.isnan(v)]
print(values)  # True and False ARE the UNIQUE values.
Countable data

Now it is time for the data that can be evaluated. In order to tidy it, some plots will be made to fix possible problems with data scale or distribution. For instance, we know that data from different waves uses different interval values; that's why we are going to map each element into the range 0 to 1:
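The scaling itself is hidden inside pretty_notebook.normalize_data. A minimal per-column min-max sketch of the idea follows; this is an assumption about the approach (the real function may group by wave before scaling), not the actual implementation.

```python
import pandas as pd

def min_max_normalize(df, columns):
    """Rescale each listed column linearly into [0, 1]."""
    df = df.copy()
    for col in columns:
        lo, hi = df[col].min(), df[col].max()
        df[col] = (df[col] - lo) / (hi - lo)
    return df

demo = pd.DataFrame({'attr': [10.0, 55.0, 100.0]})
print(min_max_normalize(demo, ['attr'])['attr'].tolist())  # [0.0, 0.5, 1.0]
```

Applied per wave, this removes the differences in rating scales between waves while preserving each respondent's relative preferences.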
pretty_notebook.normalize_data(speedDatingDF)
eachWave = speedDatingDF.groupby('wave')
eachWave.get_group(7).iloc[0:10, 69:75]
And for other waves, where the range of values was different, we also have numbers between 0 and 1:
eachWave.get_group(8).iloc[0:10,69:75]
In the end we have achieved a pretty well-normalized dataset. Now it's time for the science to begin.
# SAVING DATA IN ORDER TO SAVE TIME
speedDatingDF.to_csv('cleanDATAFRAME.csv', index=False)

# IF YOU TRUST MY CLEANING PROCESS
speedDatingDF = pd.read_csv('cleanDATAFRAME.csv')
Data analysis

After having tidied our data, we will use some of the tools we have developed during the semester to evaluate whether two people are going to match. We have chosen to show how cross-validation improves the result: we will first use a technique that does not guarantee cross-validation, and then one that does. That is, we will train several machine learning objects, and afterwards each will only influence those points it does not know.
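The dsf classifiers used below are course-specific, so as background, here is a generic sketch of the splitting step that underlies the "each model only scores points it has not seen" idea. The fold construction is an illustration of k-fold cross-validation in plain NumPy, not the dsf implementation.

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Yield (train_idx, test_idx) pairs for k disjoint folds of n points."""
    rng = np.random.RandomState(seed)
    idx = rng.permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

# Every point lands in exactly one test fold, so no model ever
# scores a point it was trained on.
n_test_points = sum(len(test) for _, test in kfold_indices(10, 3))
print(n_test_points)  # 10
```

A model trained on each `train` split and evaluated only on the matching `test` split gives an honest estimate of generalization, which is the property the out-of-bag classifiers below approximate via bootstrap sampling.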
labels = []
for boolean in speedDatingDF.match:
    if boolean:
        labels.append(1)
    else:
        labels.append(-1)
labels = np.array(labels)  # If someone got a match: 1, else -1.

the_set = dsf.LabeledSet(6)  # We will fill it with the impression each person causes and the things each one likes.
values = np.array(speedDatingDF.iloc[0:, 69:75])  # What the person asked looks for.
for i in range(len(values)):
    value = values[i]
    label = labels[i]
    the_set.addExample(value, label)

foret = dsf.ClassifierBaggingTree(5, 0.3, 0.7, True)
foret.train(the_set)
print("Bagging of decision trees (5 trees): accuracy totale: data=%.4f " % (foret.accuracy(the_set)))

perceps = dsf.ClassifierOOBPerceptron(5, 0.3, 0.0, True)
perceps.train(the_set)
print("Out of the bag with perceptrons (5 perceptrons): accuracy totale: data=%.4f " % (perceps.accuracy(the_set)))

foretOOB = dsf.ClassifierOOBTree(5, 0.3, 0.7, True)
foretOOB.train(the_set)
print("Out of the bag with trees (5 trees): accuracy totale: data=%.4f " % (foretOOB.accuracy(the_set)))
Answering the 2nd question

We want to visualize groups in order to create events where people have more affinity. One way of visualizing this is with a radar chart.
measure_up, l_aff = dsf.kmoyennes(3, speedDatingDF.iloc[0:, 24:30].dropna(axis=0), 0.05, 100)
looking_for, l_aff = dsf.kmoyennes(3, speedDatingDF.iloc[0:, 69:75].dropna(axis=0), 0.05, 100)
others_looking_for, l_aff = dsf.kmoyennes(3, speedDatingDF.iloc[0:, 75:81].dropna(axis=0), 0.05, 100)
possible_pair, l_aff = dsf.kmoyennes(3, speedDatingDF.iloc[0:, 81:87].dropna(axis=0), 0.05, 100)

data = [
    ['attractive', 'sincere', 'intelligent', 'funny', 'ambitious', 'hobbies'],
    ('The surveyed measure up', np.array(measure_up)),
    ('The surveyed is looking for:', np.array(looking_for)),
    ('The surveyed thinks others are looking for:', np.array(others_looking_for)),
    ('Possible matches are looking for:', np.array(possible_pair))
]
rc.print_rc(data, 3)
Exercises Using data from the NSFG, make a scatter plot of birth weight versus mother’s age. Plot percentiles of birth weight versus mother’s age. Compute Pearson’s and Spearman’s correlations. How would you characterize the relationship between these variables?
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/nsfg.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/first.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemPreg.dct")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemPreg.dat.gz")

import first

live, firsts, others = first.MakeFrames()
live = live.dropna(subset=['agepreg', 'totalwgt_lb'])

# Solution
ages = live.agepreg
weights = live.totalwgt_lb
print('Corr', Corr(ages, weights))
print('SpearmanCorr', SpearmanCorr(ages, weights))

# Solution
def BinnedPercentiles(df):
    """Bin the data by age and plot percentiles of weight for each bin.

    df: DataFrame
    """
    bins = np.arange(10, 48, 3)
    indices = np.digitize(df.agepreg, bins)
    groups = df.groupby(indices)

    ages = [group.agepreg.mean() for i, group in groups][1:-1]
    cdfs = [thinkstats2.Cdf(group.totalwgt_lb) for i, group in groups][1:-1]

    thinkplot.PrePlot(3)
    for percent in [75, 50, 25]:
        weights = [cdf.Percentile(percent) for cdf in cdfs]
        label = '%dth' % percent
        thinkplot.Plot(ages, weights, label=label)

    thinkplot.Config(xlabel="Mother's age (years)",
                     ylabel='Birth weight (lbs)',
                     xlim=[14, 45], legend=True)

BinnedPercentiles(live)

# Solution
thinkplot.Scatter(ages, weights, alpha=0.05, s=10)
thinkplot.Config(xlabel='Age (years)',
                 ylabel='Birth weight (lbs)',
                 xlim=[10, 45],
                 ylim=[0, 15],
                 legend=False)

# Solution
# My conclusions:
# 1) The scatterplot shows a weak relationship between the variables but
#    it is hard to see clearly.
# 2) The correlations support this. Pearson's is around 0.07, Spearman's
#    is around 0.09. The difference between them suggests some influence
#    of outliers or a non-linear relationship.
# 3) Plotting percentiles of weight versus age suggests that the
#    relationship is non-linear. Birth weight increases more quickly
#    in the range of mother's age from 15 to 25. After that, the effect
#    is weaker.
Source: solutions/chap07soln.ipynb (AllenDowney/ThinkStats2, gpl-3.0)
OPTIONAL: Time to build the network Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes. <img src="assets/neural_network.png" width=300px> The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation. We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation. Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$. Below, you have these tasks: 1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function. 2. Implement the forward pass in the train method. 3. Implement the backpropagation algorithm in the train method, including calculating the output error. 4. Implement the forward pass in the run method.
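Task 1 above asks for a sigmoid activation. As a sketch (one possible implementation, not the graded solution), the function and the derivative hinted at by the backpropagation note can be written as:

```python
import numpy as np

def sigmoid(x):
    """Sigmoid activation: squashes any real input into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_prime(x):
    """Derivative of the sigmoid, expressed in terms of its own output."""
    s = sigmoid(x)
    return s * (1.0 - s)

print(sigmoid(0.0))        # 0.5
print(sigmoid_prime(0.0))  # 0.25
```

The output layer's activation is the identity $f(x) = x$, whose derivative is the constant 1, which is the answer to the hint about the slope of $y = x$.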
class NeuralNetwork(object):
    def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
        # Set number of nodes in input, hidden and output layers.
        self.input_nodes = input_nodes
        self.hidden_nodes = hidden_nodes
        self.output_nodes = output_nodes

        # Initialize weights
        self.weights_input_to_hidden = np.random.normal(
            0.0, self.input_nodes**-0.5,
            (self.input_nodes, self.hidden_nodes))
        self.weights_hidden_to_output = np.random.normal(
            0.0, self.hidden_nodes**-0.5,
            (self.hidden_nodes, self.output_nodes))
        self.lr = learning_rate

        #### TODO: Set self.activation_function to your implemented sigmoid function ####
        #
        # Note: in Python, you can define a function with a lambda expression,
        # as shown below.
        self.activation_function = lambda x: 0  # Replace 0 with your sigmoid calculation.

        ### If the lambda code above is not something you're familiar with,
        # you can uncomment the following three lines and put your
        # implementation there instead.
        #
        # def sigmoid(x):
        #     return 0  # Replace 0 with your sigmoid calculation here
        # self.activation_function = sigmoid

    def train(self, features, targets):
        ''' Train the network on batch of features and targets.

            Arguments
            ---------
            features: 2D array, each row is one data record, each column is a feature
            targets: 1D array of target values
        '''
        n_records = features.shape[0]
        delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
        delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
        for X, y in zip(features, targets):
            #### Implement the forward pass here ####
            ### Forward pass ###
            # TODO: Hidden layer - Replace these values with your calculations.
            hidden_inputs = None   # signals into hidden layer
            hidden_outputs = None  # signals from hidden layer

            # TODO: Output layer - Replace these values with your calculations.
            final_inputs = None    # signals into final output layer
            final_outputs = None   # signals from final output layer

            #### Implement the backward pass here ####
            ### Backward pass ###
            # TODO: Output error - Replace this value with your calculations.
            error = None  # Output layer error is the difference between desired target and actual output.

            # TODO: Calculate the hidden layer's contribution to the error
            hidden_error = None

            # TODO: Backpropagated error terms - Replace these values with your calculations.
            output_error_term = None
            hidden_error_term = None

            # Weight step (input to hidden)
            delta_weights_i_h += None
            # Weight step (hidden to output)
            delta_weights_h_o += None

        # TODO: Update the weights - Replace these values with your calculations.
        self.weights_hidden_to_output += None  # update hidden-to-output weights with gradient descent step
        self.weights_input_to_hidden += None   # update input-to-hidden weights with gradient descent step

    def run(self, features):
        ''' Run a forward pass through the network with input features

            Arguments
            ---------
            features: 1D array of feature values
        '''
        #### Implement the forward pass here ####
        # TODO: Hidden layer - replace these values with the appropriate calculations.
        hidden_inputs = None   # signals into hidden layer
        hidden_outputs = None  # signals from hidden layer

        # TODO: Output layer - Replace these values with the appropriate calculations.
        final_inputs = None    # signals into final output layer
        final_outputs = None   # signals from final output layer

        return final_outputs
Source: ipnd-neural-network/Your_first_neural_network.ipynb (liumengjun/cn-deep-learning, mit)
OPTIONAL: Unit tests

Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
import unittest

inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
                       [-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])

class TestMethods(unittest.TestCase):

    ##########
    # Unit tests for data loading
    ##########

    def test_data_path(self):
        # Test that file path to dataset has been unaltered
        self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')

    def test_data_loaded(self):
        # Test that data frame loaded
        self.assertTrue(isinstance(rides, pd.DataFrame))

    ##########
    # Unit tests for network functionality
    ##########

    def test_activation(self):
        network = NeuralNetwork(3, 2, 1, 0.5)
        # Test that the activation function is a sigmoid
        self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))

    def test_train(self):
        # Test that weights are updated correctly on training
        network = NeuralNetwork(3, 2, 1, 0.5)
        network.weights_input_to_hidden = test_w_i_h.copy()
        network.weights_hidden_to_output = test_w_h_o.copy()

        network.train(inputs, targets)
        self.assertTrue(np.allclose(network.weights_hidden_to_output,
                                    np.array([[0.37275328, -0.03172939]])))
        self.assertTrue(np.allclose(network.weights_input_to_hidden,
                                    np.array([[0.10562014, 0.39775194, -0.29887597],
                                              [-0.20185996, 0.50074398, 0.19962801]])))

    def test_run(self):
        # Test correctness of run method
        network = NeuralNetwork(3, 2, 1, 0.5)
        network.weights_input_to_hidden = test_w_i_h.copy()
        network.weights_hidden_to_output = test_w_h_o.copy()

        self.assertTrue(np.allclose(network.run(inputs), 0.09998924))

suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Training the network

Here you'll set the hyperparameters for the network. The strategy is to find hyperparameters such that the error on the training set is low without overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.

You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.

Choose the number of iterations

This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, the model will not generalize well to other data; this is called overfitting. You want to find a number where the network has a low training loss and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.

Choose the learning rate

This scales the size of the weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps in the weight updates, and the longer it takes for the neural network to converge.

Choose the number of hidden nodes

The more hidden nodes you have, the more accurate the model's predictions will be.
Try a few different numbers and see how they affect the performance. You can look at the losses dictionary for a metric of the network's performance. If the number of hidden units is too low, the model won't have enough capacity to learn; if it is too high, there are too many options for the direction the learning can take. The trick is to find the right balance in the number of hidden units you choose.
def MSE(y, Y):
    return np.mean((y-Y)**2)

# Delete the following line if you have successfully implemented the NeuralNetwork in the optional section
from NN import NeuralNetwork

import sys

### Set the hyperparameters here ###
iterations = 5000
learning_rate = 0.1
hidden_nodes = 16
output_nodes = 1

N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)

losses = {'train':[], 'validation':[]}
for e in range(iterations):
    if e > 500:
        network.lr = 0.01
    if e > 2000:
        network.lr = 0.001
    # Go through a random batch of 128 records from the training data set
    batch = np.random.choice(train_features.index, size=128)
    # .loc replaces the removed pandas .ix indexer
    for record, target in zip(train_features.loc[batch].values,
                              train_targets.loc[batch]['cnt']):
        network.train(record, target)

    # Printing out the training progress
    train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
    val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
    sys.stdout.write("\rProgress: " + str(100 * e/float(iterations))[:4] \
                     + "% ... Training loss: " + str(train_loss)[:5] \
                     + " ... Validation loss: " + str(val_loss)[:5])

    losses['train'].append(train_loss)
    losses['validation'].append(val_loss)

plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
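The stepwise learning-rate schedule used above (dropping the rate at iterations 500 and 2000) can be illustrated with a minimal, framework-free sketch; the toy quadratic loss here is a stand-in for the network's MSE, not the project's actual model:

```python
# Minimal SGD sketch with a stepwise learning-rate schedule, mirroring the
# drops at iterations 500 and 2000 above. The loss (w - 3)^2 stands in for
# the network's MSE; its gradient is 2 * (w - 3).
w = 0.0
lr = 0.1
for step in range(3000):
    if step > 500:
        lr = 0.01
    if step > 2000:
        lr = 0.001
    grad = 2 * (w - 3.0)
    w -= lr * grad
print(round(w, 4))  # converges toward the minimum at w = 3
```

Shrinking the learning rate late in training is what lets the final iterations settle into the minimum instead of bouncing around it.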
ipnd-neural-network/Your_first_neural_network.ipynb
liumengjun/cn-deep-learning
mit
Consistent models in DisMod-MR from Vivarium artifact draw Take i, r, f, p from a Vivarium artifact, and make a consistent version of them. See how it compares to the original.
np.random.seed(123456)

# if dismod_mr is not installed, it should be possible to use
# !conda install --yes pymc
# !pip install dismod_mr
import dismod_mr

# you also need one more pip installable package
# !pip install vivarium_public_health
import vivarium_public_health
examples/consistent_data_from_vivarium_artifact.ipynb
ihmeuw/dismod_mr
agpl-3.0
Consistent fit with all data Let's start with a consistent fit of the simulated PD data. This includes data on prevalence, incidence, and SMR, and the assumption that remission rate is zero. All together this counts as four different data types in the DisMod-II accounting.
from vivarium_public_health.dataset_manager import Artifact

art = Artifact('/share/costeffectiveness/artifacts/obesity/obesity.hdf')
art.keys

def format_for_dismod(df, data_type):
    df = df.query('draw==0 and sex=="Female" and year_start==2017').copy()
    df['data_type'] = data_type
    df['area'] = 'all'
    df['standard_error'] = 0.001
    df['upper_ci'] = np.nan
    df['lower_ci'] = np.nan
    df['effective_sample_size'] = 10_000
    df['sex'] = 'total'
    df = df.rename({'age_group_start': 'age_start',
                    'age_group_end': 'age_end',}, axis=1)
    return df

p = format_for_dismod(art.load('cause.ischemic_heart_disease.prevalence'), 'p')
i = format_for_dismod(art.load('cause.ischemic_heart_disease.incidence'), 'i')
f = format_for_dismod(art.load('cause.ischemic_heart_disease.excess_mortality'), 'f')
m_all = format_for_dismod(art.load('cause.all_causes.cause_specific_mortality'), 'm_all')
csmr = format_for_dismod(art.load('cause.ischemic_heart_disease.cause_specific_mortality'), 'csmr')  # could also try 'pf'

dm = dismod_mr.data.ModelData()
dm.input_data = pd.concat([p, i, f, m_all, csmr], ignore_index=True)

for rate_type in 'ifr':
    dm.set_knots(rate_type, [0,40,60,80,90,100])

dm.set_level_value('i', age_before=30, age_after=101, value=0)
dm.set_increasing('i', age_start=50, age_end=100)

dm.set_level_value('p', value=0, age_before=30, age_after=101)
dm.set_level_value('r', value=0, age_before=101, age_after=101)

dm.input_data.data_type.value_counts()

dm.setup_model(rate_model='normal', include_covariates=False)

import pymc as pm
m = pm.MAP(dm.vars)

%%time
m.fit(verbose=1)

from IPython.core.pylabtools import figsize
figsize(11, 5.5)
dm.plot()

!date
examples/consistent_data_from_vivarium_artifact.ipynb
ihmeuw/dismod_mr
agpl-3.0
Read raw data, preload to allow filtering
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.info['bads'] = ['MEG 2443']  # 1 bad MEG channel

# Pick a selection of magnetometer channels. A subset of all channels was used
# to speed up the example. For a solution based on all MEG channels use
# meg=True, selection=None and add grad=4000e-13 to the reject dictionary.
# We could do this with a "picks" argument to Epochs and the LCMV functions,
# but here we use raw.pick_types() to save memory.
left_temporal_channels = mne.read_selection('Left-temporal')
raw.pick_types(meg='mag', eeg=False, eog=False, stim=False, exclude='bads',
               selection=left_temporal_channels)
reject = dict(mag=4e-12)

# Re-normalize our empty-room projectors, which should be fine after
# subselection
raw.info.normalize_proj()

# Setting time limits for reading epochs. Note that tmin and tmax are set so
# that time-frequency beamforming will be performed for a wider range of time
# points than will later be displayed on the final spectrogram. This ensures
# that all time bins displayed represent an average of an equal number of time
# windows.
tmin, tmax = -0.55, 0.75  # s
tmin_plot, tmax_plot = -0.3, 0.5  # s

# Read epochs. Note that preload is set to False to enable tf_lcmv to read the
# underlying raw object.
# Filtering is then performed on raw data in tf_lcmv and the epochs
# parameters passed here are used to create epochs from filtered data. However,
# reading epochs without preloading means that bad epoch rejection is delayed
# until later. To perform bad epoch rejection based on the reject parameter
# passed here, run epochs.drop_bad(). This is done automatically in
# tf_lcmv to reject bad epochs based on unfiltered data.
event_id = 1
events = mne.read_events(event_fname)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
                    baseline=None, preload=False, reject=reject)

# Read empty room noise, preload to allow filtering, and pick subselection
raw_noise = mne.io.read_raw_fif(noise_fname, preload=True)
raw_noise.info['bads'] = ['MEG 2443']  # 1 bad MEG channel
raw_noise.pick_types(meg='mag', eeg=False, eog=False, stim=False,
                     exclude='bads', selection=left_temporal_channels)
raw_noise.info.normalize_proj()

# Create artificial events for empty room noise data
events_noise = make_fixed_length_events(raw_noise, event_id, duration=1.)
# Create an epochs object using preload=True to reject bad epochs based on
# unfiltered data
epochs_noise = mne.Epochs(raw_noise, events_noise, event_id, tmin, tmax,
                          proj=True, baseline=None, preload=True,
                          reject=reject)

# Make sure the number of noise epochs is the same as data epochs
epochs_noise = epochs_noise[:len(epochs.events)]

# Read forward operator
forward = mne.read_forward_solution(fname_fwd)

# Read label
label = mne.read_label(fname_label)
0.15/_downloads/plot_tf_lcmv.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
We begin by implementing a function that takes as input: our $X$ and $Y$ vectors of synthetic data generated by the linear function $y = 2x + 10$; the number of passes over the dataset we want to train on (epochs); and the size of the batches (batch_size); and that returns a tf.data.Dataset. Remark: Note that the last batch may not contain the exact number of elements you specified, because the dataset was exhausted. If you want batches with the exact same number of elements per batch, you have to discard the last batch by setting:

```python
dataset = dataset.batch(batch_size, drop_remainder=True)
```

We will do that here.
def create_dataset(X, Y, epochs, batch_size):
    dataset = tf.data.Dataset.from_tensor_slices((X, Y))
    dataset = dataset.repeat(epochs).batch(batch_size, drop_remainder=True)
    return dataset
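The drop_remainder semantics can be sketched without TensorFlow; the `batch` helper below is a hypothetical pure-Python stand-in for `tf.data.Dataset.batch`, not part of the API:

```python
# Pure-Python sketch of batch(batch_size, drop_remainder=...) semantics:
# with drop_remainder=True, an incomplete final batch is discarded.
def batch(items, batch_size, drop_remainder=False):
    batches = [items[i:i + batch_size] for i in range(0, len(items), batch_size)]
    if drop_remainder and batches and len(batches[-1]) < batch_size:
        batches.pop()
    return batches

data = list(range(10))
print(batch(data, 3))                       # last batch has only 1 element
print(batch(data, 3, drop_remainder=True))  # incomplete last batch dropped
```

With 10 elements and batches of 3, the first call yields four batches (the last of size 1) while the second yields only the three full batches.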
notebooks/introduction_to_tensorflow/solutions/2a_dataset_api.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Training loop The main difference is that now, in the training loop, we will iterate directly on the tf.data.Dataset generated by our create_dataset function. We will configure the dataset so that it iterates 250 times over our synthetic dataset in batches of 2.
EPOCHS = 250
BATCH_SIZE = 2
LEARNING_RATE = 0.02
MSG = "STEP {step} - loss: {loss}, w0: {w0}, w1: {w1}\n"

w0 = tf.Variable(0.0)
w1 = tf.Variable(0.0)

dataset = create_dataset(X, Y, epochs=EPOCHS, batch_size=BATCH_SIZE)

for step, (X_batch, Y_batch) in enumerate(dataset):
    dw0, dw1 = compute_gradients(X_batch, Y_batch, w0, w1)
    w0.assign_sub(dw0 * LEARNING_RATE)
    w1.assign_sub(dw1 * LEARNING_RATE)

    if step % 100 == 0:
        loss = loss_mse(X_batch, Y_batch, w0, w1)
        print(MSG.format(step=step, loss=loss, w0=w0.numpy(), w1=w1.numpy()))

assert loss < 0.0001
assert abs(w0 - 2) < 0.001
assert abs(w1 - 10) < 0.001
notebooks/introduction_to_tensorflow/solutions/2a_dataset_api.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Let's now wrap the call to make_csv_dataset into its own function that will take only the file pattern (i.e. glob) where the dataset files are to be located:
def create_dataset(pattern):
    return tf.data.experimental.make_csv_dataset(
        pattern, 1, CSV_COLUMNS, DEFAULTS
    )

tempds = create_dataset("../data/taxi-train*")
print(tempds)
notebooks/introduction_to_tensorflow/solutions/2a_dataset_api.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Transforming the features What we really need is a dictionary of features + a label. So, we have to do two things to the above dictionary: Remove the unwanted column "key" Keep the label separate from the features Let's first implement a function that takes as input a row (represented as an OrderedDict in our tf.data.Dataset as above) and then returns a tuple with two elements: The first element being the same OrderedDict with the label dropped The second element being the label itself (fare_amount) Note that we will need to also remove the key and pickup_datetime columns, which we won't use.
UNWANTED_COLS = ["pickup_datetime", "key"]

def features_and_labels(row_data):
    label = row_data.pop(LABEL_COLUMN)
    features = row_data
    for unwanted_col in UNWANTED_COLS:
        features.pop(unwanted_col)
    return features, label
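The same split can be demonstrated on a plain Python dict standing in for the dataset's OrderedDict row; the column values below are made-up examples:

```python
# Pure-Python illustration of the features/label split above, using a plain
# dict in place of the dataset's OrderedDict row (values are made up).
UNWANTED_COLS = ["pickup_datetime", "key"]
LABEL_COLUMN = "fare_amount"

def features_and_labels(row_data):
    label = row_data.pop(LABEL_COLUMN)
    for unwanted_col in UNWANTED_COLS:
        row_data.pop(unwanted_col)
    return row_data, label

row = {"fare_amount": 12.5, "pickup_datetime": "2014-01-01",
       "key": "abc", "passenger_count": 2}
features, label = features_and_labels(row)
print(features, label)  # {'passenger_count': 2} 12.5
```

Note that dict.pop mutates the row in place, which is exactly what happens to each row of the dataset when this function is mapped over it.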
notebooks/introduction_to_tensorflow/solutions/2a_dataset_api.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Batching Let's now refactor our create_dataset function so that it takes an additional argument batch_size and batches the data accordingly. We will also use the features_and_labels function we implemented in order for our dataset to produce tuples of features and labels.
def create_dataset(pattern, batch_size):
    dataset = tf.data.experimental.make_csv_dataset(
        pattern, batch_size, CSV_COLUMNS, DEFAULTS
    )
    return dataset.map(features_and_labels)
notebooks/introduction_to_tensorflow/solutions/2a_dataset_api.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Shuffling When training a deep learning model in batches over multiple workers, it is helpful if we shuffle the data. That way, different workers will be working on different parts of the input file at the same time, and so averaging gradients across workers will help. Also, during training, we will need to read the data indefinitely. Let's refactor our create_dataset function so that it shuffles the data when the dataset is used for training. We will introduce an additional argument mode to our function to allow the function body to distinguish the case when it needs to shuffle the data (mode == "train") from when it shouldn't (mode == "eval"). Also, before returning we will want to prefetch 1 data point ahead of time (dataset.prefetch(1)) to speed up training:
def create_dataset(pattern, batch_size=1, mode="eval"):
    dataset = tf.data.experimental.make_csv_dataset(
        pattern, batch_size, CSV_COLUMNS, DEFAULTS
    )
    dataset = dataset.map(features_and_labels).cache()

    if mode == "train":
        dataset = dataset.shuffle(1000).repeat()

    # take advantage of multi-threading by prefetching 1 batch ahead
    # (tf.data.AUTOTUNE can also be passed to let TensorFlow pick the value)
    dataset = dataset.prefetch(1)
    return dataset
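The buffered shuffling that shuffle(1000) performs can be sketched in pure Python; this is a simplification of what tf.data actually does, not its implementation:

```python
import random

# Sketch of buffered shuffling: keep up to `buffer_size` elements in a buffer
# and yield a randomly chosen one as each new element streams in. This is a
# simplification of tf.data's shuffle(buffer_size).
def buffered_shuffle(items, buffer_size, seed=0):
    rng = random.Random(seed)
    buffer = []
    for item in items:
        buffer.append(item)
        if len(buffer) > buffer_size:
            yield buffer.pop(rng.randrange(len(buffer)))
    while buffer:
        yield buffer.pop(rng.randrange(len(buffer)))

shuffled = list(buffered_shuffle(range(10), buffer_size=4))
print(shuffled)  # a permutation of 0..9
```

Because only `buffer_size` elements are held in memory, the shuffle is approximate: an element can only move about `buffer_size` positions from where it started, which is why a larger buffer gives a more thorough shuffle.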
notebooks/introduction_to_tensorflow/solutions/2a_dataset_api.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Train a model for MNIST without quantization aware training
# Load MNIST dataset
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images / 255.0
test_images = test_images / 255.0

# Define the model architecture.
# TODO
model = keras.Sequential([
    keras.layers.InputLayer(input_shape=(28, 28)),
    keras.layers.Reshape(target_shape=(28, 28, 1)),
    keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation='relu'),
    keras.layers.MaxPooling2D(pool_size=(2, 2)),
    keras.layers.Flatten(),
    keras.layers.Dense(10)
])

# Train the digit classification model
# TODO
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

model.fit(
    train_images,
    train_labels,
    epochs=1,
    validation_split=0.1,
)
courses/machine_learning/deepdive2/production_ml/solutions/training_example.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
See persistence of accuracy from TF to TFLite Define a helper function to evaluate the TF Lite model on the test dataset.
import numpy as np

def evaluate_model(interpreter):
    input_index = interpreter.get_input_details()[0]["index"]
    output_index = interpreter.get_output_details()[0]["index"]

    # Run predictions on every image in the "test" dataset.
    prediction_digits = []
    for i, test_image in enumerate(test_images):
        if i % 1000 == 0:
            print('Evaluated on {n} results so far.'.format(n=i))
        # Pre-processing: add batch dimension and convert to float32 to match
        # with the model's input data format.
        # TODO
        test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
        interpreter.set_tensor(input_index, test_image)

        # Run inference.
        interpreter.invoke()

        # Post-processing: remove batch dimension and find the digit with
        # highest probability.
        output = interpreter.tensor(output_index)
        digit = np.argmax(output()[0])
        prediction_digits.append(digit)

    print('\n')
    # Compare prediction results with ground truth labels to calculate accuracy.
    prediction_digits = np.array(prediction_digits)
    accuracy = (prediction_digits == test_labels).mean()
    return accuracy
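The post-processing above boils down to an argmax over the logits followed by a mean over matches; a framework-free sketch of that step, with made-up logits:

```python
# Pure-Python sketch of the argmax + accuracy computation used above.
def argmax(values):
    best = 0
    for i, v in enumerate(values):
        if v > values[best]:
            best = i
    return best

def accuracy(logits_batch, labels):
    preds = [argmax(logits) for logits in logits_batch]
    hits = sum(p == y for p, y in zip(preds, labels))
    return hits / len(labels)

logits_batch = [[0.1, 2.0, -1.0], [3.0, 0.2, 0.1], [0.0, 0.1, 0.5]]
labels = [1, 0, 2]
print(accuracy(logits_batch, labels))  # 1.0
```

Because argmax is invariant to monotone transforms, it does not matter whether the interpreter outputs raw logits or softmax probabilities: the predicted digit is the same.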
courses/machine_learning/deepdive2/production_ml/solutions/training_example.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
See 4x smaller model from quantization You create a float TFLite model and then see that the quantized TFLite model is 4x smaller.
# Create float TFLite model.
# TODO
float_converter = tf.lite.TFLiteConverter.from_keras_model(model)
float_tflite_model = float_converter.convert()

# Measure sizes of models.
_, float_file = tempfile.mkstemp('.tflite')
_, quant_file = tempfile.mkstemp('.tflite')

with open(quant_file, 'wb') as f:
    f.write(quantized_tflite_model)

with open(float_file, 'wb') as f:
    f.write(float_tflite_model)

print("Float model in Mb:", os.path.getsize(float_file) / float(2**20))
print("Quantized model in Mb:", os.path.getsize(quant_file) / float(2**20))
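The roughly 4x figure comes from storing each weight as an 8-bit integer instead of a 32-bit float; a back-of-the-envelope sketch with a hypothetical parameter count:

```python
# Back-of-the-envelope sketch of why int8 quantization gives ~4x smaller
# models: weights dominate model size, and each float32 weight (4 bytes)
# becomes an int8 weight (1 byte). Real models add per-tensor scale and
# zero-point metadata, so the measured ratio is slightly below 4.
num_weights = 1_000_000  # hypothetical parameter count
float32_bytes = num_weights * 4
int8_bytes = num_weights * 1
print(float32_bytes / int8_bytes)  # 4.0
```

The measured sizes printed by the cell above should therefore differ by a factor close to, but a little under, 4.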
courses/machine_learning/deepdive2/production_ml/solutions/training_example.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
matrix generator for getting all possible combinations of matrix <pre> 0 | 3 | 5 1 | X | 6 2 | 4 | 7 </pre>
%%time
allmatrices = list(product(*(repeat((0, 1), 8))))
print(len(allmatrices))

dictionary_matrix_to_num = {}
dict_num_to_weights = {}

nowalls = gridify(randomWalk(newpoints(npoints), [], MaxStep(68.66)), size)
avgcorner = (nowalls[0,0]+nowalls[2,2]+nowalls[2,0]+nowalls[0,2])/4
avgwall = (nowalls[1,0]+nowalls[0,1]+nowalls[2,1]+nowalls[1,2])/4
nowalls[0,0], nowalls[2,2], nowalls[2,0], nowalls[0,2] = [avgcorner for i in range(4)]
nowalls[1,0], nowalls[0,1], nowalls[2,1], nowalls[1,2] = [avgwall for i in range(4)]
print(nowalls)

for index, case in enumerate(allmatrices):
    dictionary_matrix_to_num[case] = index
    multiplier = np.ones((3,3))
    if case[0] == 1: multiplier[0,0] = 0
    if case[1] == 1: multiplier[1,0] = 0
    if case[2] == 1: multiplier[2,0] = 0
    if case[3] == 1: multiplier[0,1] = 0
    if case[4] == 1: multiplier[2,1] = 0
    if case[5] == 1: multiplier[0,2] = 0
    if case[6] == 1: multiplier[1,2] = 0
    if case[7] == 1: multiplier[2,2] = 0
    if index % 25 == 0:
        print(index, case)
    dict_num_to_weights[index] = nowalls*multiplier/(nowalls*multiplier).sum()

a = dict_num_to_weights[145]
print(a)
plt.imshow(dict_num_to_weights[145])
plt.show()

import pickle as pkl
MyDicts = [dictionary_matrix_to_num, dict_num_to_weights]
pkl.dump(MyDicts, open("myDicts.p", "wb"))
# to read the pickled dicts use:
# dictionary_matrix_to_num, dict_num_to_weights = pkl.load(open("myDicts.p", "rb"))
testing other features/.ipynb_checkpoints/randomwalk2d-checkpoint.ipynb
sanchestm/gm-mosquito-sim
mit
from sklearn import datasets

iris = datasets.load_iris()
x = torch.tensor(iris.data, dtype=torch.float)
y = torch.tensor(iris.target, dtype=torch.long)
x.shape, y.shape
dataset = Dataset(config=dict(dataset_name='MNIST', data_dir='~/nta/results'))

# build up a small neural network
inputs = []

def init_weights():
    W1 = torch.randn((4,10), requires_grad=True)
    b1 = torch.zeros(10, requires_grad=True)
    W2 = torch.randn((10,3), requires_grad=True)
    b2 = torch.zeros(3, requires_grad=True)
    return [W1, b1, W2, b2]

# torch cross_entropy is log softmax activation + negative log likelihood
loss_func = F.cross_entropy

# simple feedforward model
def model(input):
    W1, b1, W2, b2 = parameters
    x = input @ W1 + b1
    x = F.relu(x)
    x = x @ W2 + b2
    return x

# calculate accuracy
def accuracy(out, y):
    preds = torch.argmax(out, dim=1)
    return (preds == y).float().mean().item()

from sklearn.model_selection import StratifiedKFold
cv = StratifiedKFold(n_splits=3)

# train
lr = 0.01
epochs = 1000
for train, test in cv.split(x, y):
    x_train, y_train = x[train], y[train]
    x_test, y_test = x[test], y[test]
    parameters = init_weights()
    print("Accuracy before training: {:.4f}".format(accuracy(model(x), y)))
    for epoch in range(epochs):
        loss = loss_func(model(x_train), y_train)
        if epoch % (epochs/5) == 0:
            print("Loss: {:.8f}".format(loss.item()))
        # backpropagate
        loss.backward()
        with torch.no_grad():
            for param in parameters:
                # update weights
                param -= lr * param.grad
                # zero gradients
                param.grad.zero_()
    print("Training Accuracy after training: {:.4f}".format(accuracy(model(x_train), y_train)))
    print("Test Accuracy after training: {:.4f}".format(accuracy(model(x_test), y_test)))
    print("---------------------------")
projects/dynamic_sparse/notebooks/kWinners-backup.ipynb
chetan51/nupic.research
gpl-3.0
Seems to be overfitting the model nicely. Actions:
- Test accuracy - DONE
- Repeat the experiment with a held out test set, still holds? - DONE
- Replace RELU with k-Winners - is k-Winners working? - TODO
- Extend to larger dataset, MNIST
- Replace RELU with a class
- Extend to larger model, CNNs
- Run similar tests for both RELU and k-Winners - results hold?
import torch
from torch import nn
from torchvision import models

class KWinners(nn.Module):

    def __init__(self, k=10):
        super(KWinners, self).__init__()
        self.duty_cycle = None
        self.k = k  # was hard-coded to 10, ignoring the argument
        self.beta = 100
        self.T = 1000
        self.current_time = 0

    def forward(self, x):
        # initialize duty cycle
        if self.duty_cycle is None:
            self.duty_cycle = torch.zeros_like(x)  # was zeros_like(k), a bug

        # keep track of number of past iterations
        if self.current_time < self.T:
            self.current_time += 1

        # calculating threshold and updating duty cycle
        # should not be in the graph
        tx = x.clone().detach()
        # no need to calculate gradients
        with torch.set_grad_enabled(False):
            # get threshold
            # nonzero_mask = torch.nonzero(tx)  # will need for sparse weights
            threshold = self._get_threshold(tx)
            # calculate boosting
            boosting = self._calculate_boosting()
            # get mask
            tx *= boosting
            mask = tx > threshold
            # update duty cycle (must happen after the mask is computed)
            self._update_duty_cycle(mask)

        return x * mask

    def _get_threshold(self, x):
        """Calculate dynamic threshold"""
        abs_x = torch.abs(x).view(-1)
        pos = abs_x.size()[0] - self.k
        threshold, _ = torch.kthvalue(abs_x, pos)
        return threshold

    def _update_duty_cycle(self, mask):
        """Update duty cycle"""
        time = min(self.T, self.current_time)
        self.duty_cycle *= (time-1)/time
        self.duty_cycle += mask.float() / time

    def _calculate_boosting(self):
        """Calculate boosting according to formula on spatial pooling paper"""
        mean_duty_cycle = torch.mean(self.duty_cycle)
        diff_duty_cycle = self.duty_cycle - mean_duty_cycle
        boosting = (self.beta * diff_duty_cycle).exp()
        return boosting
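Stripped of boosting and duty-cycle bookkeeping, the core k-winners-take-all idea is just "keep the k largest activations, zero the rest"; a framework-free sketch:

```python
# Pure-Python sketch of k-winners-take-all: keep the k largest activations
# and zero the rest (boosting/duty-cycle bookkeeping omitted; ties at the
# threshold may keep more than k units).
def k_winners(activations, k):
    if k >= len(activations):
        return list(activations)
    threshold = sorted(activations, reverse=True)[k - 1]
    return [a if a >= threshold else 0.0 for a in activations]

acts = [0.1, 0.9, 0.3, 0.7, 0.2]
print(k_winners(acts, 2))  # [0.0, 0.9, 0.0, 0.7, 0.0]
```

The torch version above does the same thing via torch.kthvalue on the absolute activations, but first multiplies by a boost factor so that chronically inactive units get a chance to win.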
projects/dynamic_sparse/notebooks/kWinners-backup.ipynb
chetan51/nupic.research
gpl-3.0
Defining the Network we have four features and three classes
- input layer must have 4 neurons (or units)
- output must have 3 neurons
- we'll add a single hidden layer (choose 16 neurons)
model = Sequential()
model.add(Dense(16, input_shape=(4,)))
model.add(Activation("sigmoid"))

# define output layer
model.add(Dense(3))
# softmax is used here, because there are three classes (sigmoid only works for two classes)
model.add(Activation("softmax"))

# define loss function and optimization
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
notebooks/A Neural Network Classifier using Keras.ipynb
marksibrahim/musings
mit
Nice! 14% more accurate than logistic regression. Although, you always have to wonder if we're overfitting... How about training with stochastic gradient descent?
stochastic_net = Sequential()
stochastic_net.add(Dense(16, input_shape=(4,)))
stochastic_net.add(Activation("sigmoid"))
stochastic_net.add(Dense(3))
stochastic_net.add(Activation("softmax"))
stochastic_net.compile(optimizer="sgd", loss="categorical_crossentropy", metrics=["accuracy"])

stochastic_net.fit(train_X, train_y_ohe, epochs=100, batch_size=1, verbose=0)

loss, accuracy = stochastic_net.evaluate(test_X, test_y_ohe, verbose=0)
print("Accuracy = {:.2f}".format(accuracy))
notebooks/A Neural Network Classifier using Keras.ipynb
marksibrahim/musings
mit
based on Mike Williams's Getting Started with Deep Learning on safaribooksonline Training a Neural Network to Classify Digits based on https://github.com/wxs/keras-mnist-tutorial/blob/master/MNIST%20in%20Keras.ipynb Load Handwritten Digits Data
from keras.datasets import mnist

(X_train, y_train), (X_test, y_test) = mnist.load_data()

# show sample data
for i in range(9):
    plt.subplot(3,3,i+1)
    plt.imshow(X_train[i], cmap='gray', interpolation='none')
    plt.title("Class {}".format(y_train[i]))

X_train.shape
notebooks/A Neural Network Classifier using Keras.ipynb
marksibrahim/musings
mit
Transform the 28x28 images into vectors we can input into our neural network
X_train = X_train.reshape(60000, 784)
X_test = X_test.reshape(10000, 784)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')

# without scaling, the network performs very poorly (~40% accuracy)
X_train /= 255
X_test /= 255
notebooks/A Neural Network Classifier using Keras.ipynb
marksibrahim/musings
mit
now we have an input vector of size 784, encoding each pixel Transform Output using One Hot Encoding
Y_train = np_utils.to_categorical(y_train, 10)
Y_test = np_utils.to_categorical(y_test, 10)
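What to_categorical does can be sketched in pure Python: each integer label becomes a vector with a 1.0 in the position of its class.

```python
# Pure-Python sketch of one-hot encoding, mirroring np_utils.to_categorical.
def to_one_hot(labels, num_classes):
    encoded = []
    for label in labels:
        row = [0.0] * num_classes
        row[label] = 1.0
        encoded.append(row)
    return encoded

print(to_one_hot([0, 2, 1], 3))
# [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, 1.0, 0.0]]
```

This matches the softmax output layer below: the network produces one probability per class, and the one-hot target says which class should get probability 1.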
notebooks/A Neural Network Classifier using Keras.ipynb
marksibrahim/musings
mit
Define a single layer Network
model = Sequential()

# Hidden Layer
model.add(Dense(512, input_shape=(784,)))
# use a rectified linear unit as activation
# basically a line y = x for x ≥ 0; 0 otherwise
model.add(Activation("relu"))
notebooks/A Neural Network Classifier using Keras.ipynb
marksibrahim/musings
mit
note, you can also add a dropout rate (say 0.2) to prevent overfitting
# Output
model.add(Dense(10))
model.add(Activation("softmax"))

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=["accuracy"])

model.fit(X_train, Y_train, batch_size=128, epochs=4, verbose=1)

loss, accuracy = model.evaluate(X_test, Y_test, verbose=1)
print("Accuracy = {:.2f}".format(accuracy))
notebooks/A Neural Network Classifier using Keras.ipynb
marksibrahim/musings
mit
This single layer network is 98% accurate—how amazing!
from IPython.core.display import HTML
HTML("""
<style>
div.text_cell_render h1 { /* Main titles bigger, centered */
    font-size: 2.2em;
    line-height: 1.4em;
    text-align: left;
}
div.text_cell_render h2 { /* Parts names nearer from text */
    font-size: 1.8em;
}
div.text_cell_render { /* Customize text cells */
    font-family: sans-serif;
    font-size: 1.5em;
}
</style>
""")
notebooks/A Neural Network Classifier using Keras.ipynb
marksibrahim/musings
mit
Initialize the data First we need to define a function that tells us the speed of the gas at a given distance from the center of the star or galaxy. We consider only three simple cases here, always based on the balance of gravitation and centrifugal force in a spherical mass distribution: $$ { v^2 \over r } = {{ G M(<r) } \over r^2} $$ or $$ v = \sqrt{ {G M(<r) } \over r} $$ Of course this implies (and that's what we eventually want to do) that for a given rotation curve, $v$, we can find out the mass distribution: $$ G M(<r) = v^2 r $$
def velocity(radius, model='galaxy'):
    """describe the streaming velocity as function of radius in or around
    an object such as a star or a galaxy. We usually define the velocity
    to be 1 at a radius of 1.
    """
    if model == 'star':
        # A star has a keplerian rotation curve. The planets around our sun obey this law.
        if radius == 0.0:
            return 0.0
        else:
            return 1.0/np.sqrt(radius)
    elif model == 'galaxy':
        # Most disk galaxies have a flat rotation curve with a linear slope in the center.
        if radius > 1.0:
            # flat rotation curve outside radius 1.0
            return 1.0
        else:
            # solid body inside radius 1.0, linearly rising rotation curve
            return radius
    elif model == 'plummer':
        # A plummer sphere was an early 1900s description of clusters, and is also not
        # a bad description for the inner portions of a galaxy. You can also view it
        # as a hybrid and softened version of the 'star' and 'galaxy' described above.
        # Note: not quite 1 at 1 yet
        # return radius / (1+radius*radius)**0.75
        return radius / (0.5+0.5*radius*radius)**0.75
    else:
        return 0.0

#model = 'star'
#model = 'galaxy'
model = 'plummer'

rad = np.arange(0.0, 4.0, 0.05)
vel = np.zeros(len(rad))
# this also works:  vel = rad * 0.0
for i in range(len(rad)):
    vel[i] = velocity(rad[i], model)
print("First, peak and Last value:", vel[0], vel.max(), vel[-1])
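The mass-recovery relation $G M(<r) = v^2 r$ can be checked numerically in normalized units (taking $G = 1$): for a Keplerian curve with $v = 1/\sqrt{r}$, the enclosed mass should come out constant, as expected for a point mass such as a star.

```python
import math

# Check G*M(<r) = v^2 * r in normalized units (G = 1). For a Keplerian
# rotation curve v = 1/sqrt(r), the enclosed mass is the same at every
# radius: all the mass sits at the center.
def keplerian_velocity(r):
    return 1.0 / math.sqrt(r)

radii = [0.5, 1.0, 2.0, 4.0]
masses = [keplerian_velocity(r)**2 * r for r in radii]
print(masses)  # all 1.0: the same point mass recovered at every radius
```

Running the same check with the 'galaxy' model's flat curve ($v = 1$ outside $r = 1$) would instead give $M(<r) \propto r$, i.e. mass that keeps growing with radius, which is the classic dark-matter signature.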
notebooks/Lectures2017/Lecture4/Lecture4-02.ipynb
astroumd/GradMap
gpl-3.0
This curve of velocity as function of radius is called a Rotation Curve, and extracting such a curve from an observation is crucial to understanding the mass distribution within a galaxy, or the mass of the young star at the center of the disk. We are assuming the gas is on circular orbits, which, it turns out, is not always correct for galaxies. However, for this experiment we will keep that assumption.
# set the inclination of the disk with the line of sight
inc = 60            # (0 means face-on, 90 means edge-on)

# some helper variables
cosi = math.cos(inc*math.pi/180.0)
sini = math.sin(inc*math.pi/180.0)

# radius of the disk, and steps in radius
r0 = 4.0
dr = 0.1
notebooks/Lectures2017/Lecture4/Lecture4-02.ipynb
astroumd/GradMap
gpl-3.0
Backwards Projection This is where we take a point in the sky, deproject back to where in the galaxy this point came from, and compute the velocity and projected velocity. The big advantage is the simplicity of computing the observable at each picked point in the sky. The big drawback is that the deprojection may not be trivial in cases where the model is not simple, e.g. non-circular motion and/or non-planar disks. Since we have a simple model here, let's take this approach. The so-called forward projection would need some extra steps that only add to the complexity.
dr = 0.5
x = np.arange(-r0, r0, dr)
y = np.arange(-r0, r0, dr)
xx, yy = np.meshgrid(x, y)

# helper variables for interpolations
rr = np.sqrt(xx*xx+(yy/cosi)**2)

if r0/dr < 20:
    plt.scatter(xx, yy)
else:
    print("not plotting too many gridpoints/dimension", r0/dr)
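A minimal numeric check of this deprojection: a sky point $(x, y)$ on a disk inclined by $i$ maps back to galactocentric radius $r = \sqrt{x^2 + (y/\cos i)^2}$, and its line-of-sight velocity is $v(r)\,(x/r)\sin i$. The sketch below assumes a flat rotation curve $v(r) = 1$ for simplicity.

```python
import math

# Deproject a sky point (x, y) on a disk inclined by inc_deg degrees back to
# its galactocentric radius, then project the circular speed onto the line
# of sight: v_obs = v * (x / r) * sin(i). Here v(r) = 1 (flat curve).
def v_obs(x, y, inc_deg, v=1.0):
    cosi = math.cos(math.radians(inc_deg))
    sini = math.sin(math.radians(inc_deg))
    r = math.hypot(x, y / cosi)
    return v * (x / r) * sini

print(v_obs(1.0, 0.0, 90.0))  # edge-on, on the major axis: full rotation speed
print(v_obs(0.0, 1.0, 60.0))  # on the minor axis: no line-of-sight rotation
```

This is why velocity fields of inclined disks show the classic "spider diagram": maximal along the major axis, zero along the minor axis.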
notebooks/Lectures2017/Lecture4/Lecture4-02.ipynb
astroumd/GradMap
gpl-3.0
Although we have defined a function velocity to compute the rotation velocity at any radius, this function cannot easily compute from a numpy array, such as the grid on the sky we just created. Thus we need a convenience function to do just that. You could also try and modify the velocity function so it takes a numpy array as input and returns a numpy array as well!
def velocity2d(rad2d, model):
    """ convenient helper function to take a 2d array of radii
        and return the same-shaped velocities
    """
    (ny, nx) = rad2d.shape
    vel2d = rad2d.copy()      # could also do np.zeros(nx*ny).reshape(ny,nx)
    for y in range(ny):
        for x in range(nx):
            vel2d[y,x] = velocity(rad2d[y,x], model)
    return vel2d

vv = velocity2d(rr, model)
vvmasked = np.ma.masked_where(rr > r0, vv)
vobs = vvmasked * xx / rr * sini
print("V_max:", vobs.max())

vmax = 1
vmax = vobs.max()
if vmax > 0:
    plt.imshow(vobs, origin=['Lower'], vmin=-vmax, vmax=vmax)
    #plt.matshow(vobs, origin=['Lower'], vmin=-vmax, vmax=vmax)
else:
    plt.imshow(vobs, origin=['Lower'])
plt.colorbar()
notebooks/Lectures2017/Lecture4/Lecture4-02.ipynb
astroumd/GradMap
gpl-3.0
Graph regularization for sentiment classification using synthesized graphs <table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://tensorflow.google.cn/neural_structured_learning/tutorials/graph_keras_lstm_imdb"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a></td> <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a></td> </table> Text feature vectors This notebook classifies movie reviews as positive or negative using the text of the review. This is an example of binary classification, an important and widely applicable kind of machine learning problem. In this notebook, we will demonstrate how to use graph regularization by building a graph from the given input. The general recipe for building a graph-regularized model with the Neural Structured Learning (NSL) framework when the input does not contain an explicit graph is as follows: Create embeddings for each text sample in the input. This can be done using pre-trained models such as word2vec, Swivel, BERT, etc. Build a graph based on these embeddings by using a similarity metric such as the 'L2' distance, 'cosine' distance, etc. Nodes in the graph correspond to samples, and edges in the graph correspond to the similarity between pairs of samples. Generate training data from the above synthesized graph and the sample features. The resulting training data will contain neighbor features in addition to the original node features. Create a neural network as a base model using the Keras sequential, functional, or subclass API. Wrap the base model with the GraphRegularization wrapper class, provided by the NSL framework, to create a new graph Keras model. This new model will include a graph regularization loss as a regularization term in its training objective. Train and evaluate the graph Keras model. Note: We expect that it would take readers about 1 hour to go through this tutorial. Requirements Install the Neural Structured Learning package. Install tensorflow-hub.
!pip install --quiet neural-structured-learning
!pip install --quiet tensorflow-hub
site/zh-cn/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb
tensorflow/docs-l10n
apache-2.0
Dependencies and imports
import matplotlib.pyplot as plt
import numpy as np

import neural_structured_learning as nsl

import tensorflow as tf
import tensorflow_hub as hub

# Resets notebook state
tf.keras.backend.clear_session()

print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print(
    "GPU is",
    "available" if tf.config.list_physical_devices("GPU") else "NOT AVAILABLE")
site/zh-cn/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb
tensorflow/docs-l10n
apache-2.0
IMDB dataset The IMDB dataset contains the text of 50,000 movie reviews from the Internet Movie Database. These are split into 25,000 reviews for training and 25,000 reviews for testing. The training and testing sets are balanced, meaning they contain an equal number of positive and negative reviews. In this tutorial, we will use a preprocessed version of the IMDB dataset. Download the preprocessed IMDB dataset The IMDB dataset comes packaged with TensorFlow. It has already been preprocessed such that the reviews (sequences of words) have been converted to sequences of integers, where each integer represents a specific word in a dictionary. The following code downloads the IMDB dataset (or uses a cached copy if it has already been downloaded):
imdb = tf.keras.datasets.imdb
(pp_train_data, pp_train_labels), (pp_test_data, pp_test_labels) = (
    imdb.load_data(num_words=10000))
site/zh-cn/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb
tensorflow/docs-l10n
apache-2.0
The argument num_words=10000 keeps the top 10,000 most frequently occurring words in the training data. The rare words are discarded to keep the size of the vocabulary manageable. Explore the data Let's take a moment to understand the format of the data. The dataset comes preprocessed: each example is an array of integers representing the words of the movie review. Each label is an integer value of either 0 or 1, where 0 indicates a negative review and 1 indicates a positive review.
print('Training entries: {}, labels: {}'.format(
    len(pp_train_data), len(pp_train_labels)))
training_samples_count = len(pp_train_data)
site/zh-cn/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb
tensorflow/docs-l10n
apache-2.0
The text of the reviews has been converted to integers, where each integer represents a specific word in a dictionary. Here's what the first review looks like:
print(pp_train_data[0])
site/zh-cn/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb
tensorflow/docs-l10n
apache-2.0
电影评论的长度可能各不相同。以下代码显示了第一条评论和第二条评论中的单词数。由于神经网络的输入必须具有相同的长度,因此我们稍后需要解决长度问题。
len(pp_train_data[0]), len(pp_train_data[1])
site/zh-cn/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb
tensorflow/docs-l10n
apache-2.0
将整数重新转换为单词 了解如何将整数重新转换为相应的文本可能非常实用。在这里,我们将创建一个辅助函数来查询包含整数到字符串映射的字典对象:
def build_reverse_word_index(): # A dictionary mapping words to an integer index word_index = imdb.get_word_index() # The first indices are reserved word_index = {k: (v + 3) for k, v in word_index.items()} word_index['<PAD>'] = 0 word_index['<START>'] = 1 word_index['<UNK>'] = 2 # unknown word_index['<UNUSED>'] = 3 return dict((value, key) for (key, value) in word_index.items()) reverse_word_index = build_reverse_word_index() def decode_review(text): return ' '.join([reverse_word_index.get(i, '?') for i in text])
site/zh-cn/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb
tensorflow/docs-l10n
apache-2.0
现在,我们可以使用 decode_review 函数来显示第一条评论的文本:
decode_review(pp_train_data[0])
site/zh-cn/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb
tensorflow/docs-l10n
apache-2.0
计算图构造 计算图的构造涉及为文本样本创建嵌入向量,然后使用相似度函数比较嵌入向量。 在继续之前,我们先创建一个目录来存储在本教程中创建的工件。
!mkdir -p /tmp/imdb
site/zh-cn/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb
tensorflow/docs-l10n
apache-2.0
创建样本嵌入向量 我们将使用预训练的 Swivel 嵌入向量为输入中的每个样本创建 tf.train.Example 格式的嵌入向量。我们将以 TFRecord 格式存储生成的嵌入向量以及代表每个样本 ID 的附加特征。这有助于我们在未来能够将样本嵌入向量与计算图中的相应节点进行匹配。
pretrained_embedding = 'https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1' hub_layer = hub.KerasLayer( pretrained_embedding, input_shape=[], dtype=tf.string, trainable=True) def _int64_feature(value): """Returns int64 tf.train.Feature.""" return tf.train.Feature(int64_list=tf.train.Int64List(value=value.tolist())) def _bytes_feature(value): """Returns bytes tf.train.Feature.""" return tf.train.Feature( bytes_list=tf.train.BytesList(value=[value.encode('utf-8')])) def _float_feature(value): """Returns float tf.train.Feature.""" return tf.train.Feature(float_list=tf.train.FloatList(value=value.tolist())) def create_embedding_example(word_vector, record_id): """Create tf.Example containing the sample's embedding and its ID.""" text = decode_review(word_vector) # Shape = [batch_size,]. sentence_embedding = hub_layer(tf.reshape(text, shape=[-1,])) # Flatten the sentence embedding back to 1-D. sentence_embedding = tf.reshape(sentence_embedding, shape=[-1]) features = { 'id': _bytes_feature(str(record_id)), 'embedding': _float_feature(sentence_embedding.numpy()) } return tf.train.Example(features=tf.train.Features(feature=features)) def create_embeddings(word_vectors, output_path, starting_record_id): record_id = int(starting_record_id) with tf.io.TFRecordWriter(output_path) as writer: for word_vector in word_vectors: example = create_embedding_example(word_vector, record_id) record_id = record_id + 1 writer.write(example.SerializeToString()) return record_id # Persist TF.Example features containing embeddings for training data in # TFRecord format. create_embeddings(pp_train_data, '/tmp/imdb/embeddings.tfr', 0)
site/zh-cn/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb
tensorflow/docs-l10n
apache-2.0
构建计算图 现在有了样本嵌入向量,我们将使用它们来构建相似度计算图:此计算图中的节点将与样本对应,此计算图中的边将与节点对之间的相似度对应。 神经结构学习提供了一个计算图构建库,用于基于样本嵌入向量构建计算图。它使用余弦相似度作为相似度指标来比较嵌入向量并在它们之间构建边。它还支持指定相似度阈值,用于从最终计算图中丢弃不相似的边。在本示例中,使用 0.99 作为相似度阈值,使用 12345 作为随机种子,我们最终得到一个具有 429,415 条双向边的计算图。在这里,我们借助计算图构建器对局部敏感哈希 (LSH) 算法的支持来加快计算图构建。有关使用计算图构建器的 LSH 支持的详细信息,请参阅 build_graph_from_config API 文档。
graph_builder_config = nsl.configs.GraphBuilderConfig( similarity_threshold=0.99, lsh_splits=32, lsh_rounds=15, random_seed=12345) nsl.tools.build_graph_from_config(['/tmp/imdb/embeddings.tfr'], '/tmp/imdb/graph_99.tsv', graph_builder_config)
site/zh-cn/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb
tensorflow/docs-l10n
apache-2.0
在输出 TSV 文件中,每条双向边均由两条有向边表示,因此该文件共含 429,415 * 2 = 858,830 行:
!wc -l /tmp/imdb/graph_99.tsv
site/zh-cn/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb
tensorflow/docs-l10n
apache-2.0
注:计算图质量以及与之相关的嵌入向量质量对于计算图正则化非常重要。虽然我们在此笔记本中使用了 Swivel 嵌入向量,但如果使用 BERT 等嵌入向量,可能会更准确地捕获评论语义。我们鼓励用户根据自身需求选用合适的嵌入向量。 样本特征 我们使用 tf.train.Example 格式为问题创建样本特征,并将其保留为 TFRecord 格式。每个样本将包含以下三个特征: id:样本的节点 ID。 words:包含单词 ID 的 int64 列表。 label:用于标识评论的目标类的单例 int64。
def create_example(word_vector, label, record_id): """Create tf.Example containing the sample's word vector, label, and ID.""" features = { 'id': _bytes_feature(str(record_id)), 'words': _int64_feature(np.asarray(word_vector)), 'label': _int64_feature(np.asarray([label])), } return tf.train.Example(features=tf.train.Features(feature=features)) def create_records(word_vectors, labels, record_path, starting_record_id): record_id = int(starting_record_id) with tf.io.TFRecordWriter(record_path) as writer: for word_vector, label in zip(word_vectors, labels): example = create_example(word_vector, label, record_id) record_id = record_id + 1 writer.write(example.SerializeToString()) return record_id # Persist TF.Example features (word vectors and labels) for training and test # data in TFRecord format. next_record_id = create_records(pp_train_data, pp_train_labels, '/tmp/imdb/train_data.tfr', 0) create_records(pp_test_data, pp_test_labels, '/tmp/imdb/test_data.tfr', next_record_id)
site/zh-cn/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb
tensorflow/docs-l10n
apache-2.0
使用计算图近邻增强训练数据 拥有样本特征与合成计算图后,我们可以生成用于神经结构学习的增强训练数据。NSL 框架提供了一个将计算图和样本特征相结合的库,二者结合可生成用于计算图正则化的最终训练数据。所得的训练数据将包括原始样本特征及其相应近邻的特征。 在本教程中,我们考虑无向边并为每个样本最多使用 3 个近邻,以使用计算图近邻来增强训练数据。
nsl.tools.pack_nbrs( '/tmp/imdb/train_data.tfr', '', '/tmp/imdb/graph_99.tsv', '/tmp/imdb/nsl_train_data.tfr', add_undirected_edges=True, max_nbrs=3)
site/zh-cn/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb
tensorflow/docs-l10n
apache-2.0
基础模型 现在,我们已准备好构建无计算图正则化的基础模型。为了构建此模型,我们可以使用在构建计算图时使用的嵌入向量,也可以与分类任务一起学习新的嵌入向量。在此笔记本中,我们将使用后者。 全局变量
NBR_FEATURE_PREFIX = 'NL_nbr_' NBR_WEIGHT_SUFFIX = '_weight'
site/zh-cn/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb
tensorflow/docs-l10n
apache-2.0
超参数 我们将使用 HParams 的实例来包含用于训练和评估的各种超参数和常量。以下为各项内容的简要介绍: num_classes:有 2 个类:正面和负面。 max_seq_length:在本示例中,此参数为每条电影评论中考虑的最大单词数。 vocab_size:此参数为本示例考虑的词汇量。 distance_type:此参数为用于正则化样本与其近邻的距离指标。 graph_regularization_multiplier:此参数控制计算图正则化项在总体损失函数中的相对权重。 num_neighbors:用于计算图正则化的近邻数。此值必须小于或等于调用 nsl.tools.pack_nbrs 时上文使用的 max_nbrs 参数。 num_fc_units:神经网络的全连接层中的单元数。 train_epochs:训练周期数。 batch_size:用于训练和评估的批次大小。 eval_steps:认定评估完成之前需要处理的批次数。如果设置为 None,则将评估测试集中的所有实例。
class HParams(object): """Hyperparameters used for training.""" def __init__(self): ### dataset parameters self.num_classes = 2 self.max_seq_length = 256 self.vocab_size = 10000 ### neural graph learning parameters self.distance_type = nsl.configs.DistanceType.L2 self.graph_regularization_multiplier = 0.1 self.num_neighbors = 2 ### model architecture self.num_embedding_dims = 16 self.num_lstm_dims = 64 self.num_fc_units = 64 ### training parameters self.train_epochs = 10 self.batch_size = 128 ### eval parameters self.eval_steps = None # All instances in the test set are evaluated. HPARAMS = HParams()
site/zh-cn/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb
tensorflow/docs-l10n
apache-2.0
准备数据 评论(整数数组)必须先转换为张量,然后才能馈入神经网络。可以通过以下两种方式完成此转换: 将数组转换为指示单词是否出现的 0 和 1 向量,类似于独热编码。例如,序列 [3, 5] 将成为一个 10,000 维向量,除了索引 3 和 5 为 1 之外,其余均为 0。然后,使其成为我们网络中的第一层(Dense 层),可以处理浮点向量数据。但是,此方法需要占用大量内存,需要 num_words * num_reviews 大小的矩阵。 另外,我们可以填充数组以使其均具有相同的长度,然后创建形状为 max_length * num_reviews 的整数张量。我们可以使用能够处理此形状的嵌入向量层作为网络中的第一层。 在本教程中,我们将使用第二种方法。 由于电影评论长度必须相同,因此我们将使用如下定义的 pad_sequence 函数来标准化长度。
def make_dataset(file_path, training=False):
  """Creates a `tf.data.TFRecordDataset`.

  Args:
    file_path: Name of the file in the `.tfrecord` format containing
      `tf.train.Example` objects.
    training: Boolean indicating if we are in training mode.

  Returns:
    An instance of `tf.data.TFRecordDataset` containing the `tf.train.Example`
    objects.
  """

  def pad_sequence(sequence, max_seq_length):
    """Pads the input sequence (a `tf.SparseTensor`) to `max_seq_length`."""
    pad_size = tf.maximum([0], max_seq_length - tf.shape(sequence)[0])
    padded = tf.concat(
        [sequence.values,
         tf.fill((pad_size), tf.cast(0, sequence.dtype))],
        axis=0)
    # The input sequence may be larger than max_seq_length. Truncate down if
    # necessary.
    return tf.slice(padded, [0], [max_seq_length])

  def parse_example(example_proto):
    """Extracts relevant fields from the `example_proto`.

    Args:
      example_proto: An instance of `tf.train.Example`.

    Returns:
      A pair whose first value is a dictionary containing relevant features
      and whose second value contains the ground truth labels.
    """
    # The 'words' feature is a variable length word ID vector.
    feature_spec = {
        'words': tf.io.VarLenFeature(tf.int64),
        'label': tf.io.FixedLenFeature((), tf.int64, default_value=-1),
    }
    # We also extract corresponding neighbor features in a similar manner to
    # the features above during training.
    if training:
      for i in range(HPARAMS.num_neighbors):
        nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, i, 'words')
        nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, i,
                                         NBR_WEIGHT_SUFFIX)
        feature_spec[nbr_feature_key] = tf.io.VarLenFeature(tf.int64)

        # We assign a default value of 0.0 for the neighbor weight so that
        # graph regularization is done on samples based on their exact number
        # of neighbors. In other words, non-existent neighbors are discounted.
        feature_spec[nbr_weight_key] = tf.io.FixedLenFeature(
            [1], tf.float32, default_value=tf.constant([0.0]))

    features = tf.io.parse_single_example(example_proto, feature_spec)

    # Since the 'words' feature is a variable length word vector, we pad it to
    # a constant maximum length based on HPARAMS.max_seq_length.
    features['words'] = pad_sequence(features['words'], HPARAMS.max_seq_length)
    if training:
      for i in range(HPARAMS.num_neighbors):
        nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, i, 'words')
        features[nbr_feature_key] = pad_sequence(features[nbr_feature_key],
                                                 HPARAMS.max_seq_length)

    labels = features.pop('label')
    return features, labels

  dataset = tf.data.TFRecordDataset([file_path])
  if training:
    dataset = dataset.shuffle(10000)
  dataset = dataset.map(parse_example)
  dataset = dataset.batch(HPARAMS.batch_size)
  return dataset


train_dataset = make_dataset('/tmp/imdb/nsl_train_data.tfr', True)
test_dataset = make_dataset('/tmp/imdb/test_data.tfr')
site/zh-cn/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb
tensorflow/docs-l10n
apache-2.0
构建模型 神经网络是通过堆叠层创建的,这需要确定两个主要架构决策: 在模型中使用多少个层? 为每个层使用多少个隐藏单元? 在本示例中,输入数据由单词索引数组组成。要预测的标签为 0 或 1。 在本教程中,我们将使用双向 LSTM 作为基础模型。
# This function exists as an alternative to the bi-LSTM model used in this # notebook. def make_feed_forward_model(): """Builds a simple 2 layer feed forward neural network.""" inputs = tf.keras.Input( shape=(HPARAMS.max_seq_length,), dtype='int64', name='words') embedding_layer = tf.keras.layers.Embedding(HPARAMS.vocab_size, 16)(inputs) pooling_layer = tf.keras.layers.GlobalAveragePooling1D()(embedding_layer) dense_layer = tf.keras.layers.Dense(16, activation='relu')(pooling_layer) outputs = tf.keras.layers.Dense(1, activation='sigmoid')(dense_layer) return tf.keras.Model(inputs=inputs, outputs=outputs) def make_bilstm_model(): """Builds a bi-directional LSTM model.""" inputs = tf.keras.Input( shape=(HPARAMS.max_seq_length,), dtype='int64', name='words') embedding_layer = tf.keras.layers.Embedding(HPARAMS.vocab_size, HPARAMS.num_embedding_dims)( inputs) lstm_layer = tf.keras.layers.Bidirectional( tf.keras.layers.LSTM(HPARAMS.num_lstm_dims))( embedding_layer) dense_layer = tf.keras.layers.Dense( HPARAMS.num_fc_units, activation='relu')( lstm_layer) outputs = tf.keras.layers.Dense(1, activation='sigmoid')(dense_layer) return tf.keras.Model(inputs=inputs, outputs=outputs) # Feel free to use an architecture of your choice. model = make_bilstm_model() model.summary()
site/zh-cn/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb
tensorflow/docs-l10n
apache-2.0
按顺序堆叠各层以构建分类器: 第一层为接受整数编码词汇的 Input 层。 第二层为 Embedding 层,该层接受整数编码词汇并查找嵌入向量中的每个单词索引。在模型训练时会学习这些向量。向量会向输出数组添加维度。得到的维度为:<code>(batch, sequence, embedding)</code>。 接下来,双向 LSTM 层会为每个样本返回固定长度的输出向量。 此固定长度的输出向量穿过一个包含 64 个隐藏单元的全连接 (Dense) 层。 最后一层与单个输出节点密集连接。经过 sigmoid 激活函数后,输出 0 到 1 之间的浮点数,表示概率或置信度。 隐藏单元 上述模型在输入和输出之间有两个中间(或称“隐藏”)层(不包括 Embedding 层)。输出(单元、节点或神经元)的数量是层的表示空间的维度。换言之,即网络学习内部表示时允许的自由度。 模型的隐藏单元越多(更高维度的表示空间)和/或层越多,则网络可以学习的表示越复杂。但是,这会导致网络的计算开销增加,并且可能导致学习不需要的模式——提高在训练数据(而不是测试数据)上的性能的模式。这就叫过拟合。 损失函数和优化器 模型训练需要一个损失函数和一个优化器。由于这是二元分类问题,并且模型输出概率(具有 Sigmoid 激活的单一单元层),我们将使用 binary_crossentropy 损失函数。
model.compile( optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
site/zh-cn/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb
tensorflow/docs-l10n
apache-2.0
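作为补充,binary_crossentropy 的数学定义可以用纯 Python 手动验证。这是示意性实现,函数名 `binary_crossentropy` 为本示例自拟;Keras 的实际实现还包含更多数值稳定性处理。

```python
import math

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    """对一批样本计算平均二元交叉熵:-[y*log(p) + (1-y)*log(1-p)]。"""
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)  # 裁剪概率,避免 log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

loss = binary_crossentropy([1, 0], [0.9, 0.1])
print(round(loss, 4))  # ≈ 0.1054,预测接近标签时损失较小
```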
创建验证集 训练时,我们希望检验该模型在未见过的数据上的准确率。为此,需要将原始训练数据中的一部分分离出来,创建一个验证集。(为何现在不使用测试集?因为我们的目标是仅使用训练数据开发和调整模型,然后只使用一次测试数据来评估准确率)。 在本教程中,我们将大约 10% 的初始训练样本(25000 的 10%)作为用于训练的带标签数据,其余作为验证数据。由于初始训练/测试数据集以 50/50 的比例拆分(每个数据集 25000 个样本),因此我们现在的有效训练/验证/测试数据集拆分比例为 5/45/50。 请注意,“train_dataset”已进行批处理并且已打乱顺序。
validation_fraction = 0.9 validation_size = int(validation_fraction * int(training_samples_count / HPARAMS.batch_size)) print(validation_size) validation_dataset = train_dataset.take(validation_size) train_dataset = train_dataset.skip(validation_size)
site/zh-cn/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb
tensorflow/docs-l10n
apache-2.0
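上面单元中 validation_size 的取值可以手动核对(下面沿用本教程的设定:25,000 个训练样本、批次大小 128):

```python
# 与上面单元相同的常量(对应本教程的设定)。
training_samples_count = 25000
batch_size = 128
validation_fraction = 0.9

num_batches = training_samples_count // batch_size        # 整批的数量
validation_size = int(validation_fraction * num_batches)  # 用于验证的批次数
train_batches = num_batches - validation_size             # 剩余整批用于训练

print(num_batches, validation_size, train_batches)  # 195 175 20
```

注意:take 和 skip 均以批次为单位(train_dataset 已分批);分批后还会多出一个不满 128 的批次,它会留在训练集中。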
训练模型。 以 mini-batch 训练模型。训练时,基于验证集监测模型的损失和准确率:
history = model.fit( train_dataset, validation_data=validation_dataset, epochs=HPARAMS.train_epochs, verbose=1)
site/zh-cn/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb
tensorflow/docs-l10n
apache-2.0
评估模型 现在,我们来看看模型的表现。模型将返回两个值:损失(表示错误的数字,值越低越好)和准确率。
results = model.evaluate(test_dataset, steps=HPARAMS.eval_steps) print(results)
site/zh-cn/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb
tensorflow/docs-l10n
apache-2.0
创建准确率/损失随时间变化的图表 model.fit() 会返回一个 History 对象,其中包含一个字典,记录了训练过程中产生的所有信息:
history_dict = history.history history_dict.keys()
site/zh-cn/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb
tensorflow/docs-l10n
apache-2.0
其中有四个条目:每个条目代表训练和验证过程中的一项监测指标。我们可以使用这些指标来绘制用于比较的训练和验证图表,以及训练和验证准确率图表:
acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']

epochs = range(1, len(acc) + 1)

# "-r^" is for solid red line with triangle markers.
plt.plot(epochs, loss, '-r^', label='Training loss')
# "-bo" is for solid blue line with circle markers.
plt.plot(epochs, val_loss, '-bo', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend(loc='best')
plt.show()

plt.clf()   # clear figure
plt.plot(epochs, acc, '-r^', label='Training acc')
plt.plot(epochs, val_acc, '-bo', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(loc='best')
plt.show()
site/zh-cn/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb
tensorflow/docs-l10n
apache-2.0
请注意,训练损失会逐周期下降,而训练准确率则逐周期上升。使用梯度下降优化时,这是预期结果:优化器会在每次迭代中尽量减小目标损失。 计算图正则化 现在,我们已准备好尝试使用上面构建的基础模型来执行计算图正则化。我们将使用神经结构学习框架提供的 GraphRegularization 包装器类来包装基础 (bi-LSTM) 模型以包含计算图正则化。训练和评估计算图正则化模型的其余步骤与基础模型相似。 创建计算图正则化模型 为了评估计算图正则化带来的增量收益,我们将创建一个新的基础模型实例。这是因为 model 已完成了几个周期的训练,如果重用这个已训练的模型来创建计算图正则化模型,再与 model 进行比较,结果将有失公允。
# Build a new base LSTM model. base_reg_model = make_bilstm_model() # Wrap the base model with graph regularization. graph_reg_config = nsl.configs.make_graph_reg_config( max_neighbors=HPARAMS.num_neighbors, multiplier=HPARAMS.graph_regularization_multiplier, distance_type=HPARAMS.distance_type, sum_over_axis=-1) graph_reg_model = nsl.keras.GraphRegularization(base_reg_model, graph_reg_config) graph_reg_model.compile( optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
site/zh-cn/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb
tensorflow/docs-l10n
apache-2.0
训练模型。
graph_reg_history = graph_reg_model.fit( train_dataset, validation_data=validation_dataset, epochs=HPARAMS.train_epochs, verbose=1)
site/zh-cn/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb
tensorflow/docs-l10n
apache-2.0
评估模型
graph_reg_results = graph_reg_model.evaluate(test_dataset, steps=HPARAMS.eval_steps) print(graph_reg_results)
site/zh-cn/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb
tensorflow/docs-l10n
apache-2.0
创建准确率/损失随时间变化的图表
graph_reg_history_dict = graph_reg_history.history graph_reg_history_dict.keys()
site/zh-cn/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb
tensorflow/docs-l10n
apache-2.0
字典中共有五个条目:训练损失、训练准确率、训练计算图损失、验证损失和验证准确率。我们可以共同绘制这些条目以便比较。请注意,计算图损失仅在训练期间计算。
acc = graph_reg_history_dict['accuracy']
val_acc = graph_reg_history_dict['val_accuracy']
loss = graph_reg_history_dict['loss']
graph_loss = graph_reg_history_dict['scaled_graph_loss']
val_loss = graph_reg_history_dict['val_loss']

epochs = range(1, len(acc) + 1)

plt.clf()   # clear figure

# "-r^" is for solid red line with triangle markers.
plt.plot(epochs, loss, '-r^', label='Training loss')
# "-gD" is for solid green line with diamond markers.
plt.plot(epochs, graph_loss, '-gD', label='Training graph loss')
# "-bo" is for solid blue line with circle markers.
plt.plot(epochs, val_loss, '-bo', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend(loc='best')
plt.show()

plt.clf()   # clear figure
plt.plot(epochs, acc, '-r^', label='Training acc')
plt.plot(epochs, val_acc, '-bo', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(loc='best')
plt.show()
site/zh-cn/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb
tensorflow/docs-l10n
apache-2.0
半监督学习的能力 当训练数据量很少时,半监督学习(更具体地说,即本教程背景中的计算图正则化)将非常实用。可通过利用训练样本之间的相似度来弥补缺乏训练数据的不足,这在传统的监督学习中是无法实现的。 我们将监督比率定义为训练样本与样本总数(包括训练样本、验证样本和测试样本)之间的比率。在此笔记本中,我们使用了 0.05 的监督比率(即带标签数据的 5%)来训练基础模型和计算图正则化模型。我们在下面的单元中展示了监督比率对模型准确率的影响。
# Accuracy values for both the Bi-LSTM model and the feed forward NN model have # been precomputed for the following supervision ratios. supervision_ratios = [0.3, 0.15, 0.05, 0.03, 0.02, 0.01, 0.005] model_tags = ['Bi-LSTM model', 'Feed Forward NN model'] base_model_accs = [[84, 84, 83, 80, 65, 52, 50], [87, 86, 76, 74, 67, 52, 51]] graph_reg_model_accs = [[84, 84, 83, 83, 65, 63, 50], [87, 86, 80, 75, 67, 52, 50]] plt.clf() # clear figure fig, axes = plt.subplots(1, 2) fig.set_size_inches((12, 5)) for ax, model_tag, base_model_acc, graph_reg_model_acc in zip( axes, model_tags, base_model_accs, graph_reg_model_accs): # "-r^" is for solid red line with triangle markers. ax.plot(base_model_acc, '-r^', label='Base model') # "-gD" is for solid green line with diamond markers. ax.plot(graph_reg_model_acc, '-gD', label='Graph-regularized model') ax.set_title(model_tag) ax.set_xlabel('Supervision ratio') ax.set_ylabel('Accuracy(%)') ax.set_ylim((25, 100)) ax.set_xticks(range(len(supervision_ratios))) ax.set_xticklabels(supervision_ratios) ax.legend(loc='best') plt.show()
site/zh-cn/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb
tensorflow/docs-l10n
apache-2.0
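监督比率的定义(训练样本数除以样本总数)可以用如下小例子验证,其中函数名 `supervision_ratio` 为本示例自拟:

```python
def supervision_ratio(num_train, num_validation, num_test):
    """训练样本占样本总数(训练 + 验证 + 测试)的比例。"""
    return num_train / (num_train + num_validation + num_test)

# 本教程的拆分:训练/验证/测试 = 5/45/50,共 50,000 个样本。
ratio = supervision_ratio(2500, 22500, 25000)
print(ratio)  # 0.05
```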
Neural machine translation with attention <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/text/tutorials/nmt_with_attention"> <img src="https://www.tensorflow.org/images/tf_logo_32px.png" /> View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/text/blob/master/docs/tutorials/nmt_with_attention.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/text/blob/master/docs/tutorials/nmt_with_attention.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/text/docs/tutorials/nmt_with_attention.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> This notebook trains a sequence to sequence (seq2seq) model for Spanish to English translation based on Effective Approaches to Attention-based Neural Machine Translation. This is an advanced example that assumes some knowledge of: Sequence to sequence models TensorFlow fundamentals below the keras layer: Working with tensors directly Writing custom keras.Models and keras.layers While this architecture is somewhat outdated it is still a very useful project to work through to get a deeper understanding of attention mechanisms (before going on to Transformers). After training the model in this notebook, you will be able to input a Spanish sentence, such as "¿todavia estan en casa?", and return the English translation: "are you still at home?" The resulting model is exportable as a tf.saved_model, so it can be used in other TensorFlow environments. The translation quality is reasonable for a toy example, but the generated attention plot is perhaps more interesting. 
This shows which parts of the input sentence have the model's attention while translating: <img src="https://tensorflow.org/images/spanish-english.png" alt="spanish-english attention plot"> Note: This example takes approximately 10 minutes to run on a single P100 GPU. Setup
!pip install tensorflow_text import numpy as np import typing from typing import Any, Tuple import tensorflow as tf from tensorflow.keras.layers.experimental import preprocessing import tensorflow_text as tf_text import matplotlib.pyplot as plt import matplotlib.ticker as ticker
third_party/tensorflow-text/src/docs/tutorials/nmt_with_attention.ipynb
nwjs/chromium.src
bsd-3-clause
Text preprocessing One of the goals of this tutorial is to build a model that can be exported as a tf.saved_model. To make that exported model useful, it should take tf.string inputs and return tf.string outputs: all the text processing happens inside the model. Standardization The model is dealing with multilingual text with a limited vocabulary. So it will be important to standardize the input text. The first step is Unicode normalization to split accented characters and replace compatibility characters with their ASCII equivalents. The tensorflow_text package contains a Unicode normalization operation:
example_text = tf.constant('¿Todavía está en casa?') print(example_text.numpy()) print(tf_text.normalize_utf8(example_text, 'NFKD').numpy())
third_party/tensorflow-text/src/docs/tutorials/nmt_with_attention.ipynb
nwjs/chromium.src
bsd-3-clause
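As a sketch, the same NFKD normalization is also available in Python's standard `unicodedata` module, which makes it easy to see how an accented character is split into a base letter plus a combining mark:

```python
import unicodedata

text = '¿Todavía está en casa?'
normalized = unicodedata.normalize('NFKD', text)

# 'í' becomes 'i' followed by a combining acute accent (U+0301), and likewise
# for 'á', so the normalized string is two code points longer.
print(len(text), len(normalized))
print('\u0301' in normalized)  # True
```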
Text Vectorization This standardization function will be wrapped up in a preprocessing.TextVectorization layer which will handle the vocabulary extraction and conversion of input text to sequences of tokens.
max_vocab_size = 5000 input_text_processor = preprocessing.TextVectorization( standardize=tf_lower_and_split_punct, max_tokens=max_vocab_size)
third_party/tensorflow-text/src/docs/tutorials/nmt_with_attention.ipynb
nwjs/chromium.src
bsd-3-clause
The TextVectorization layer and many other experimental.preprocessing layers have an adapt method. This method reads one epoch of the training data, and works a lot like Model.fit. This adapt method initializes the layer based on the data. Here it determines the vocabulary:
input_text_processor.adapt(inp) # Here are the first 10 words from the vocabulary: input_text_processor.get_vocabulary()[:10]
third_party/tensorflow-text/src/docs/tutorials/nmt_with_attention.ipynb
nwjs/chromium.src
bsd-3-clause
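Conceptually, `adapt` builds a frequency-ordered vocabulary from the training text. A simplified stdlib sketch of that idea (this is not the actual `TextVectorization` implementation, and `build_vocabulary` is a name made up for this example):

```python
from collections import Counter

def build_vocabulary(texts, max_tokens):
    """Build a frequency-ordered vocabulary, reserving index 0 for padding
    ('') and index 1 for out-of-vocabulary ('[UNK]'), mirroring the
    TextVectorization layer's defaults."""
    counts = Counter()
    for text in texts:
        counts.update(text.lower().split())
    most_common = [word for word, _ in counts.most_common(max_tokens - 2)]
    return ['', '[UNK]'] + most_common

vocab = build_vocabulary(
    ['el gato come', 'el perro come', 'el gato duerme'], max_tokens=6)
print(vocab)  # ['', '[UNK]', 'el', ...]
```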
That's the Spanish TextVectorization layer, now build and .adapt() the English one:
output_text_processor = preprocessing.TextVectorization( standardize=tf_lower_and_split_punct, max_tokens=max_vocab_size) output_text_processor.adapt(targ) output_text_processor.get_vocabulary()[:10]
third_party/tensorflow-text/src/docs/tutorials/nmt_with_attention.ipynb
nwjs/chromium.src
bsd-3-clause
The visible jumps in the plot are at the epoch boundaries. Translate Now that the model is trained, implement a function to execute the full text => text translation. For this the model needs to invert the text => token IDs mapping provided by the output_text_processor. It also needs to know the IDs for special tokens. This is all implemented in the constructor for the new class. The implementation of the actual translate method will follow. Overall this is similar to the training loop, except that the input to the decoder at each time step is a sample from the decoder's last prediction.
class Translator(tf.Module):

  def __init__(self, encoder, decoder, input_text_processor,
               output_text_processor):
    self.encoder = encoder
    self.decoder = decoder
    self.input_text_processor = input_text_processor
    self.output_text_processor = output_text_processor

    self.output_token_string_from_index = (
        tf.keras.layers.experimental.preprocessing.StringLookup(
            vocabulary=output_text_processor.get_vocabulary(),
            mask_token='',
            invert=True))

    # The output should never generate padding, unknown, or start.
    index_from_string = tf.keras.layers.experimental.preprocessing.StringLookup(
        vocabulary=output_text_processor.get_vocabulary(), mask_token='')
    token_mask_ids = index_from_string(['', '[UNK]', '[START]']).numpy()

    token_mask = np.zeros([index_from_string.vocabulary_size()], dtype=bool)
    token_mask[np.array(token_mask_ids)] = True
    self.token_mask = token_mask

    self.start_token = index_from_string(tf.constant('[START]'))
    self.end_token = index_from_string(tf.constant('[END]'))

translator = Translator(
    encoder=train_translator.encoder,
    decoder=train_translator.decoder,
    input_text_processor=input_text_processor,
    output_text_processor=output_text_processor,
)
third_party/tensorflow-text/src/docs/tutorials/nmt_with_attention.ipynb
nwjs/chromium.src
bsd-3-clause
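The role of `token_mask` above (preventing the decoder from ever emitting padding, unknown, or start tokens) can be sketched in NumPy. This is illustrative only: `masked_argmax` is a hypothetical helper made up for this example, and the real `Translator` applies the mask to the decoder's logits before sampling rather than taking a plain argmax.

```python
import numpy as np

def masked_argmax(logits, token_mask):
    """Pick the highest-scoring token while excluding masked special tokens."""
    masked = np.where(token_mask, -np.inf, logits)
    return int(np.argmax(masked))

vocab = ['', '[UNK]', '[START]', '[END]', 'home', 'are']
token_mask = np.array([True, True, True, False, False, False])

# Even though '[UNK]' has the highest raw score, it can never be selected.
logits = np.array([0.1, 5.0, 0.2, 0.3, 2.0, 1.0])
print(vocab[masked_argmax(logits, token_mask)])  # home
```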
Then we train the GNB model with SHOGUN:
X_train, Y_train = gen_samples(n_train) machine = sg.create_machine("GaussianNaiveBayes", labels=sg.create_labels(Y_train)) machine.train(sg.create_features(X_train))
doc/ipython-notebooks/multiclass/naive_bayes.ipynb
geektoni/shogun
bsd-3-clause
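Gaussian naive Bayes fits an independent Gaussian per class to each feature and predicts the class with the highest log-likelihood. The following NumPy sketch shows the idea; it is not Shogun's implementation, `fit_gnb` and `predict_gnb` are names made up for this example, and features are stored one sample per column to match Shogun's convention.

```python
import numpy as np

def fit_gnb(X, y):
    """Estimate per-class feature means and variances. X: (n_features, n_samples)."""
    params = {}
    for c in np.unique(y):
        Xc = X[:, y == c]
        # Small epsilon keeps variances strictly positive.
        params[c] = (Xc.mean(axis=1), Xc.var(axis=1) + 1e-9)
    return params

def predict_gnb(X, params):
    """Assign each column of X to the class with the highest Gaussian log-likelihood."""
    classes = sorted(params)
    scores = []
    for c in classes:
        mean, var = params[c]
        ll = -0.5 * np.sum(
            np.log(2 * np.pi * var)[:, None] +
            (X - mean[:, None]) ** 2 / var[:, None],
            axis=0)
        scores.append(ll)
    return np.array(classes)[np.argmax(scores, axis=0)]

# Two well-separated 2-D blobs.
rng = np.random.default_rng(0)
X0 = rng.normal(loc=-5, scale=1, size=(2, 50))
X1 = rng.normal(loc=5, scale=1, size=(2, 50))
X = np.hstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

params = fit_gnb(X, y)
pred = predict_gnb(X, params)
print((pred == y).mean())  # well-separated blobs are classified perfectly
```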
Run classification over the whole area to generate color regions:
delta = 0.1 x = np.arange(-20, 20, delta) y = np.arange(-20, 20, delta) X,Y = np.meshgrid(x,y) Z = machine.apply(sg.create_features(np.vstack((X.flatten(), Y.flatten())))).get("labels")
doc/ipython-notebooks/multiclass/naive_bayes.ipynb
geektoni/shogun
bsd-3-clause
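The flatten-then-reshape pattern above (classify every grid point, then reshape the predictions back into the grid) can be checked in isolation with NumPy, using a stand-in rule in place of the trained classifier:

```python
import numpy as np

delta = 0.5
x = np.arange(-2, 2, delta)
y = np.arange(-2, 2, delta)
X, Y = np.meshgrid(x, y)

# Stack the grid into a (2, n_points) feature matrix, as done for Shogun above.
points = np.vstack((X.flatten(), Y.flatten()))

# Stand-in "classifier": label each point by which side of x + y = 0 it lies on.
Z = (points[0] + points[1] > 0).astype(int)

# Reshaping restores the grid layout expected by plt.contourf.
print(points.shape, Z.reshape(X.shape).shape)  # (2, 64) (8, 8)
```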
Plot figure:
plt.figure(figsize=(8,5)) plt.contourf(X, Y, Z.reshape(X.shape), np.arange(0, len(models)+1)) plt.scatter(X_train[0,:],X_train[1,:], c=Y_train) plt.axis('off') plt.tight_layout()
doc/ipython-notebooks/multiclass/naive_bayes.ipynb
geektoni/shogun
bsd-3-clause
9. Audience Upload to GMP GMP and Google Ads Connector is used to upload audience data to GMP (e.g. Google Analytics, Campaign Manager) or Google Ads in an automatic and reliable way. The following sections provide high-level guidelines on deploying and configuring GMP and Google Ads Connector. For detailed instructions on how to set up different GMP endpoints, refer to the solution's README.md. Requirements This notebook requires a BigQuery table containing the scored audience list. Refer to 7.batch_scoring.ipynb for details on how to get the scored audience. Import required modules
# Add custom utils module to Python environment import os import sys sys.path.append(os.path.abspath(os.pardir)) from IPython import display from utils import helpers
packages/propensity/09.audience_upload.ipynb
google/compass
apache-2.0
Deploy GMP and Google Ads Connector First, clone the source code by executing the cell below:
!git clone https://github.com/GoogleCloudPlatform/cloud-for-marketing.git
packages/propensity/09.audience_upload.ipynb
google/compass
apache-2.0
Next, execute the following two steps to deploy GMP and Google Ads Connector on your GCP project. Copy the following content: bash cd cloud-for-marketing/marketing-analytics/activation/gmp-googleads-connector && ./deploy.sh default_install Execute the following cell to start a new Terminal session and paste the copied content into the Terminal. NOTE: This notebook uses the Google Analytics Measurement Protocol API to demonstrate audience upload, so choose 0 on Step 5: Confirm the integration with external APIs... during the installation process in the Terminal session. It takes about 3 minutes to set up the audience uploader pipeline.
display.HTML('<a href="" data-commandlinker-command="terminal:create-new">▶Access Terminal◀︎</a>')
packages/propensity/09.audience_upload.ipynb
google/compass
apache-2.0